
GLOBAL AND ANALYTICAL PLAN

ENGLISH III

E-mail: estelagrilo.s@fcyt.umss.edu.bo

INSTRUCTOR: LIC. MA. ESTELA GRILO SALVATIERRA

ACADEMIC YEAR 2023
Cochabamba - Bolivia
GLOBAL PLAN 2023
I. Identification.
PROGRAMS: Systems Engineering - B.Sc. in Physics

COURSE: ENGLISH III   GROUP: 1

CODE (COD_SIS): 1803009   LEVEL (YEAR/SEMESTER): Semester I/2023

PREREQUISITES:
1. English I
2. English II

CURRICULAR COORDINATION AREAS:
- Vertical: Level III of the Study Plan.
- Horizontal: Coordinates with the different areas and courses.

CLASS SCHEDULE:
- Tuesday, 14:15 - 15:45, Room 693 A
- Friday, 15:45 - 17:15, Room 693 B

INSTRUCTOR: Lic. Maria Estela Grilo Salvatierra

E-mail: estelagrilo.s@fcyt.umss.edu.bo
Cell: 72750879

II. General Justification.

English III is genuinely important in the Informatics and Systems programs because knowing and using the language allows future professionals to access a wide range of up-to-date information sources, most of which are originally published in English. Command of English exposes our professionals to an inexhaustible source of information and lets them communicate and exchange experiences with professionals around the world. It should also be kept in mind that most programming languages derive from English, which makes it necessary to understand and use the language and to become familiar with English vocabulary and morphology.

To achieve the objectives set for English III, the course relies on English for Specific Purposes (ESP), which serves many purposes tied to the particular subject or field where it is used. Students in the Systems Engineering program therefore need command of English for Specific Purposes as a working resource, not just general English.

Command of technical vocabulary related to the Systems program gives students knowledge they can apply more readily in their field of study, whether they are developing programs or already working professionally. A further benefit of English for Specific Purposes is the knowledge students will have when they finish the program, which will serve them throughout their careers.

The inclusion of English III in the curriculum of the Informatics and Systems Engineering programs rests on:

⮚ The fundamental need to face the challenges of the 21st century, in a landscape dominated by globalization, competition among educational systems, the internationalization of the professions, and scientific and technological progress.
⮚ Responding to the demands of accreditation processes and to the need for greater knowledge of information technology.
⮚ Achieving excellence in the training of professionals in Informatics Engineering and Systems Engineering with solid knowledge of English, considered a universal language because most information related to information technology and programming (software, books, brochures, manuals, etc.) is originally published in English.
⮚ Reaching a level of technical English that allows future professionals to access scholarships abroad more easily, take part in international seminars, enter the job market, and perform more effectively.
⮚ Command of English will increase students' future job opportunities.

English III is directly or indirectly related to the other courses in the technology area, because specific English vocabulary is needed for a better understanding of texts in those fields.
III. General Purposes.
English III provides the technical English vocabulary (ESP, English for Specific Purposes) the student needs to develop reading comprehension and expression in the foreign language, building on prior knowledge. The structures, vocabulary, and skills acquired in English I and II serve as the base for new knowledge, supported by a variety of technological tools.

The knowledge acquired in this course contributes to the profile of the future systems engineer by helping students understand texts and oral messages in English and by promoting use of the language in professional contexts.

⮚ Refine oral and written communicative competence across the four skills, so the student can interact in real-life situations with fluency and precision.
⮚ Promote a level of reading comprehension applicable to extensive reading on the proposed informatics topics, with active, flexible, and respectful participation.
⮚ Acquire strategies for producing simple written texts suited to different communicative situations, both general and discipline-specific, connected to real life, the local context, and the demands of the environment, showing critical and respectful reflection.
⮚ Develop reading comprehension strategies so future professionals can tackle texts in their specialty independently.
⮚ Raise students' awareness of the importance of learning other languages, English in particular, in relation to their chosen profession.
IV. General Objectives.
Help students become familiar with the specialized vocabulary (ESP) used in the Systems Engineering program, through reading, oral and written comprehension, and use of English in real situations.

The course sets out the following objectives:
❖ Obtain results that demonstrate the need for English for Specific Purposes (ESP) in a more meaningful and effective way for the student body, based on the Vision and Mission of our programs.
❖ Learn fundamental patterns of English syntax, morphology, and grammar.
❖ Acquire and use technical terminology.
❖ Adapt course materials using English for Specific Purposes to the needs of Informatics and Systems students.
❖ Set achievable challenges so students feel the need to acquire a language that gives them access to up-to-date information and improves their knowledge.
❖ Provide materials whose contents are real, practical, and attainable.

SPECIFIC OBJECTIVES
The specific objectives to be achieved are:
⮚ Acquire basic theoretical tools related to technical English.
⮚ Establish a working level of comprehension of basic vocabulary.
⮚ Internalize concepts that consolidate knowledge of English for Specific Purposes.
⮚ Extract general and specific information from texts.
⮚ Provide reflective tools that allow students to analyze and use grammar and to understand texts containing technical terminology.

V. Structure of the course in teaching units and their description.

UNIT 1 NAME: INTRODUCTION TO SOFTWARE ENGINEERING
UNIT LENGTH IN ACADEMIC PERIODS: 6 academic periods
UNIT OBJECTIVES:

By the end of the unit, the student will have:

✔ Acquired basic knowledge of software engineering and its types
✔ Understood whether all software requires software engineering

CONTENT:
1. What is Software Engineering?
1.1. How does Software Work?
1.2. Types of Software Engineering
1.2.1. Operational Software Engineering
1.2.2. Transitional Software Engineering
1.2.3. Software Engineering Maintenance
1.3. System Software vs. Application Software

TEACHING METHODOLOGY:
To approach and teach technical English, the teaching strategy applies a scientific, qualitative-quantitative, bibliographic, and exploratory method, including grammar-translation, intermediate, and silent techniques. The result is a holistic view of the teaching-learning process, supported by online exercises and material prepared by the instructor.

PREDOMINANT TECHNIQUES PROPOSED FOR THE UNIT:
1. Introduction of the specific vocabulary related to the unit
2. Presentation of the unit, with questions
3. Reading comprehension

UNIT EVALUATION:
The use of English for Specific Purposes will be evaluated continuously through written and oral activities over the course of the unit's learning activities, assessing the students' cognitive progress. Evaluation will be informal, through individual and group participation and through the completion of oral and written exercises.

UNIT-SPECIFIC BIBLIOGRAPHY:
https://www.elprocus.com/what-are-network-devices-and-their-types/
https://www.techtarget.com/searchnetworking/definition/network-topology
https://www.geeksforgeeks.org/types-of-network-topology/

UNIT 2 NAME: OPERATING SYSTEM

UNIT LENGTH IN ACADEMIC PERIODS: 6 academic periods
UNIT OBJECTIVES:

By the end of the unit, the student will have:

✔ Identified with precision the specific vocabulary related to operating systems
✔ Understood simple texts with vocabulary in context
✔ Extracted information from videos
✔ Used visual aids to build clear concepts
✔ Answered questionnaires based on the material provided

CONTENT:
2. What is an Operating System?
2.1. Main Concept
2.2. Types of Operating System
2.2.1. Batch Operating System
2.2.2. Multi-Tasking/Time-Sharing Operating Systems
2.2.3. Real-Time OS
2.2.4. Distributed Operating System
2.2.5. Network Operating System
2.2.6. Mobile OS
2.3. Functions of an Operating System
2.4. Features of an Operating System (OS)
2.5. Advantages and Disadvantages of an OS
2.5.1. Advantages of an Operating System
2.5.2. Disadvantages of an Operating System
2.6. What is the Kernel in an Operating System?
2.6.1. Features of the Kernel

TEACHING METHODOLOGY:
To approach and teach technical English, the teaching strategy applies a scientific, qualitative-quantitative, bibliographic, and exploratory method, including grammar-translation, intermediate, and silent techniques. The result is a holistic view of the teaching-learning process, supported by online exercises and material prepared by the instructor.

PREDOMINANT TECHNIQUES PROPOSED FOR THE UNIT:
1. Introduction of the specific vocabulary related to the unit
2. Presentation of the unit, with questions
3. Guided discussion
4. Practical work

UNIT EVALUATION:
The use of English for Specific Purposes will be evaluated continuously through written and oral activities over the course of the unit's learning activities, assessing the students' cognitive progress. Evaluation will be informal, through individual and group participation and through the completion of oral and written exercises.

UNIT-SPECIFIC BIBLIOGRAPHY:
https://www.elprocus.com/what-are-network-devices-and-their-types/
https://www.techtarget.com/searchnetworking/definition/network-topology
https://www.geeksforgeeks.org/types-of-network-topology/

UNIT 3 NAME: PROGRAMMING LANGUAGES

UNIT LENGTH IN ACADEMIC PERIODS: 6 academic periods

UNIT OBJECTIVES:
By the end of the unit, the student will have:
✔ Identified technical vocabulary related to programming languages
✔ Understood concepts in technical English
✔ Answered questionnaires based on technical concepts
✔ Completed practical exercises individually or in groups

CONTENT:
3.1. Computer Programming Languages
3.2. Types of Programming Languages
3.2.1. High-Level Languages (most common)
3.2.2. Low-Level Languages
3.3. The Most Common Programming Languages
3.3.1. Python
3.3.2. Java
3.3.3. HTML (HyperText Markup Language)
3.3.4. PHP (acronym for "PHP: Hypertext Preprocessor")
3.3.5. C#

TEACHING METHODOLOGY:
To approach and teach technical English, the teaching strategy applies a scientific, qualitative-quantitative, bibliographic, and exploratory method, including grammar-translation, intermediate, and silent techniques. The result is a holistic view of the teaching-learning process, supported by online exercises and material prepared by the instructor.

PREDOMINANT TECHNIQUES PROPOSED FOR THE UNIT:
1. Introduction of the specific vocabulary related to the unit
2. Presentation of the unit, with questions
3. Guided discussion
4. Reading comprehension exercises and ESP (English for Specific Purposes) practice

UNIT EVALUATION:
The use of English for Specific Purposes will be evaluated continuously through written and oral activities over the course of the unit's learning activities, assessing the students' cognitive progress. Evaluation will be informal, through individual and group participation and through the completion of oral and written exercises.

UNIT-SPECIFIC BIBLIOGRAPHY:
https://www.youtube.com/watch?v=FhrJAi-eHwI
https://www.geeksforgeeks.org/top-10-programming-languages-to-learn-in-2022/
https://www.snhu.edu/about-us/newsroom/stem/what-is-computer-programming
https://www.computerhope.com/jargon/p/programming-language.htm#types
https://hackr.io/blog/best-programming-languages-to-learn
https://www.chakray.com/programming-languages-types-and-features/
https://www.pluralsight.com/blog/software-development/everything-you-need-to-know-about-c-

UNIT 4 NAME: ARTIFICIAL INTELLIGENCE

UNIT LENGTH IN ACADEMIC PERIODS: 6 academic periods

UNIT OBJECTIVES:

By the end of the unit, the student will have:

✔ Identified technical terms related to Artificial Intelligence
✔ Understood concepts in technical English
✔ Answered questionnaires on technical AI concepts with precision
✔ Completed practical exercises individually or in groups

CONTENT:
4.1. Specific Technical Vocabulary Related to AI
4.2. What is Artificial Intelligence (AI)?
4.3. How Does Artificial Intelligence (AI) Work?
4.4. Types of Artificial Intelligence
4.5. Applications of Artificial Intelligence
4.6. Use of Artificial Intelligence at a Practical Level

TEACHING METHODOLOGY:
To approach and teach technical English, the teaching strategy applies a scientific, qualitative-quantitative, bibliographic, and exploratory method, including grammar-translation, intermediate, and silent techniques. The result is a holistic view of the teaching-learning process, supported by online exercises and material prepared by the instructor.

PREDOMINANT TECHNIQUES PROPOSED FOR THE UNIT:
1. Introduction of the specific vocabulary related to the unit
2. Presentation of the unit, with oral questions and answers
3. Interactive video with a questionnaire to be completed in class
4. Oral presentation (3-minute recording) expressing concepts based on the topic presented

UNIT EVALUATION:
The use of English for Specific Purposes will be evaluated continuously through written and oral activities over the course of the unit's learning activities, assessing the students' cognitive progress. Evaluation will be informal, through individual and group participation and through the completion of oral and written exercises.

UNIT-SPECIFIC BIBLIOGRAPHY:
https://www.techopedia.com/definition/32836/robotics
https://www.futurelearn.com/info/courses/begin-robotics/0/steps/2840
https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence (Ed Burns, former executive editor at TechTarget: https://www.techtarget.com/contributor/Ed-Burns)
Jake Frankenfield, "Artificial Intelligence: What It Is and How It Is Used," updated July 6, 2022: https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
"What Does Artificial Intelligence (AI) Mean?", reviewed by Margaret Rouse, last updated January 5, 2023: https://www.techopedia.com/definition/190/artificial-intelligence-ai
https://www.techopedia.com/topic/87/artificial-intelligence

UNIT 5 NAME: ROBOTICS

UNIT LENGTH IN ACADEMIC PERIODS: 8 academic periods
UNIT OBJECTIVES:
By the end of the unit, the student will have:
✔ Identified vocabulary related to robotics
✔ Understood concepts about robotics in technical English (ESP)
✔ Answered questionnaires on technical robotics concepts with precision
✔ Completed practical exercises individually or in groups
✔ Translated technical terms

CONTENT:
5.1. What is Robotics?
5.2. What is the Function of Robotics?
5.3. What is a Robot in Robotics?
5.4. Are Robotics and Artificial Intelligence the Same Thing?
5.5. Robotics Engineering
5.6. What is the Difference Between Robotics and Robotics Engineering?
5.7. Is Robotics Part of AI? Is AI Part of Robotics? What is the Difference Between the Two Terms?
5.8. Aspects of Robots
5.9. Components of a Robot
5.10. Parts of a Robot
5.11. Types of Robots

TEACHING METHODOLOGY:
To approach and teach technical English, the teaching strategy applies a scientific, qualitative-quantitative, bibliographic, and exploratory method, including grammar-translation, intermediate, and silent techniques. The result is a holistic view of the teaching-learning process, supported by online exercises and material prepared by the instructor.

PREDOMINANT TECHNIQUES PROPOSED FOR THE UNIT:
1. Introduction of the specific vocabulary related to the unit
2. Definition of artificial intelligence
3. Presentation of the unit, with questions to be answered orally or in writing
4. Interactive videos with questionnaires to be answered in class
5. Presentation of different types of robots, their functions, and their uses

UNIT EVALUATION:
The use of English for Specific Purposes will be evaluated continuously through written and oral activities over the course of the unit's learning activities, assessing the students' cognitive progress. Evaluation will be informal, through individual and group participation and through the completion of oral and written exercises.

UNIT-SPECIFIC BIBLIOGRAPHY:
Material prepared by the English III instructor
https://www.techopedia.com/definition/32836/robotics
https://www.futurelearn.com/info/courses/begin-robotics/0/steps/2840
https://robotical.io/blog/robot-terminology/
https://www.devopsschool.com/blog/what-is-robotics-and-what-are-the-advantages-and-disadvantages-in-detail/
https://www.g2.com/articles/history-of-robots?hsCtaTracking=314fbdf4-ec3d-40a6-bbdb-5141a44d5781%7C0de54460-a6b2-49cf-9256-4580d2a73c0f

UNIT 6 NAME: FUTURE TRENDS IN INFORMATION TECHNOLOGY (IT)

UNIT LENGTH IN ACADEMIC PERIODS: 16 academic hours

UNIT OBJECTIVES:

By the end of the unit, the student will have:

✔ Correctly recognized the technical vocabulary used in each of the topics presented
✔ Used the vocabulary correctly
✔ Identified the concepts with precision
✔ Understood academic texts in technical English
✔ Answered questionnaires related to the topics presented
✔ Extracted general and specific information from a video
✔ Given a 3-minute oral presentation on a topic

CONTENT:
6.1. Artificial Intelligence and Machine Learning
6.2. Robotic Process Automation (RPA)
6.3. Edge Computing; What Is Cloud Computing and the Top Cloud Technologies to Look Out for in 2022
6.4. Quantum Computing
6.5. Virtual Reality and Augmented Reality
6.6. Blockchain
6.7. Internet of Things
6.8. 5G
6.9. Cybersecurity
6.10. Nanotechnology

TEACHING METHODOLOGY:
To approach and teach technical English, the teaching strategy applies a scientific, qualitative-quantitative, bibliographic, and exploratory method, including grammar-translation, intermediate, and silent techniques. The result is a holistic view of the teaching-learning process, supported by online exercises and material prepared by the instructor.

PREDOMINANT TECHNIQUES PROPOSED FOR THE UNIT:
1. Identification of the specific vocabulary related to the unit
2. Presentation of each topic, with questions to be answered orally or in writing, individually or in groups
3. Group presentations of the topics
4. Reading comprehension exercises

UNIT EVALUATION:
The use of English for Specific Purposes will be evaluated continuously through written and oral activities over the course of the unit's learning activities, assessing the students' cognitive progress. Evaluation will be informal, through individual and group participation and through the completion of oral and written exercises.

UNIT-SPECIFIC BIBLIOGRAPHY:
https://www.oracle.com/internet-of-things/what-is-iot/
https://www.cisco.com/c/en/us/products/security/what-is-cybersecurity.html
https://www.techtarget.com/searchsecurity/definition/cybersecurity
https://darktrace.com/blog/the-future-of-cyber-security-2022-predictions-from-darktrace
VI. Evaluation.
Evaluation will be formative: students will be assessed throughout the development of each topic, observing how well they assimilate the fundamental concepts of each unit. Evaluation will follow the evaluation system of the School of Science and Technology.
❖ The two midterm exams and the final exam are each graded out of 100 points.
❖ The first and second midterm grades are combined into a weighted score out of 100 points; a student who obtains 51 points or more has passed the course.
❖ A student who does not reach 51 points must sit the final exam.
❖ A student may also pass the course by taking only the final exam and obtaining a grade of 51.
❖ A student may pass at the make-up stage (segunda instancia) by obtaining a grade of 51.
❖ Students are required to sit exams on the established dates and at the established times.
❖ To qualify for the make-up exam, a student needs a weighted score of 26 points or more across the two midterms and must present the make-up exam slip (Boleta de Segunda Instancia).
❖ Exams will be rescheduled only in cases of illness certified by the University Social Security service (Seguro Social Universitario, SSU).
❖ Exam dates will be scheduled according to the current academic calendar.
❖ All exams will be both oral and written.

VII. Schedule.
Unit 1: 8 academic hours, 2 weeks
Unit 2: 8 academic hours, 2 weeks
Unit 3: 8 academic hours, 2 weeks
Unit 4: 10 academic hours, 2 ½ weeks
Unit 5: 14 academic hours, 3 ½ weeks
Midterm exams: 4 academic hours, 1 week
Return and review of the midterm exams: 2 academic hours, ½ week
Final exam: 2 academic hours, ½ week
Return and review of the final exam: 1 academic hour, ¼ week
Make-up exam: 2 academic hours, ½ week
Return and review of the make-up exam: 1 academic hour, ¼ week

VIII. General Provisions.

In English III, students must observe the following general provisions:
⮚ Students must attend class in the established rooms and at the established times.
⮚ Students must arrive on time.
⮚ Students must bring their printed materials.
⮚ Submission of practical assignments is mandatory.
⮚ Students must bring an electronic device for class work.
⮚ Three absences, consecutive or not, count as abandonment of the course, and the student loses the right to sit the midterm exams.
⮚ Online work will be carried out through Google Classroom, where students must register and review the material and the assigned work.
⮚ Students must check exam dates in Google Classroom.
⮚ Students must check their midterm and final exam grades.
IX. General Bibliography.
Cybersecurity: https://www.cisco.com/c/en/us/products/security/what-is-cybersecurity.html#~how-cybersecurity-works
Operating System Concept: https://afteracademy.com/blog/what-is-kernel-in-operating-system-and-what-are-the-various-types-of-kernel
Operating System: https://www.mygreatlearning.com/blog/what-is-operating-system/#functions-of-operating-systems
Video: https://www.youtube.com/watch?v=8kujH0nlgv
Protocol Definition: https://www.techtarget.com/searchnetworking/definition/protocol
Network: https://www.computerhope.com/jargon/n/network.htm
Network Topology: https://www.heavy.ai/technical-glossary/network-topology
Types of Network:
https://www.elprocus.com/what-are-network-devices-and-their-types/
https://www.techtarget.com/searchnetworking/definition/network-topology
https://www.geeksforgeeks.org/types-of-network-topology/
Video: https://www.youtube.com/watch?v=znIjk-7ZuqI
Video: https://www.youtube.com/watch?v=614QGgw_FA4
Video: https://www.youtube.com/watch?v=FhrJAi-eHwI
https://www.geeksforgeeks.org/top-10-programming-languages-to-learn-in-2022/
Programming: https://www.snhu.edu/about-us/newsroom/stem/what-is-computer-programming
Types of Programming Language:
https://www.computerhope.com/jargon/p/programming-language.htm#types
https://hackr.io/blog/best-programming-languages-to-learn
https://www.chakray.com/programming-languages-types-and-features/
Robotics Definition: https://www.techopedia.com/definition/32836/robotics
What is Robotics?: https://www.futurelearn.com/info/courses/begin-robotics/0/steps/2840
Artificial Intelligence Definition: https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence
What is 5G?: https://www.techtarget.com/searchnetworking/definition/5G
5G Definition: https://www.verizon.com/about/our-company/5g/what-5g
Pros and Cons of 5G: https://www.gomultilink.com/blog/multilog/the-pros-cons-and-potentials-of-5g
Advantages and Disadvantages of 5G: https://whatsag.com/5g/5g-advantages_disadvantages.php
https://www.simplilearn.com/top-technology-trends-and-jobs-article#1_computing_power
UNIT 1
WHAT IS SOFTWARE ENGINEERING?
Software engineering is a concept in and of itself, but to better understand it, you need to know
what each part of the term means before you can fully understand how they operate together. It
can be difficult to understand, even though it does seem straightforward. That is because the
pieces are more complicated than many believe - and working with software engineering for an
application is difficult and time-consuming. Software engineering has two parts: software and
engineering.
Software is a collection of codes, documents, and triggers that does a specific job and
fills a specific requirement.
Engineering is the development of products using best practices, principles, and
methods.

Software engineering leads to a product that is reliable, efficient, and effective at what it does. While software engineering can lead to products that do not do this, the product will almost always go back into the production stage. So, what is the complete definition of software engineering?
The IEEE fully defines software engineering as:
1. The application of a systematic, disciplined, quantifiable approach to the development,
operation, and maintenance of software; that is, the application of engineering to software.

What the software engineering meaning doesn't explain is that everything that has been software engineered needs to work on real machines in real situations, not just within a test environment.

Software engineering starts when there is a demand for a specific result or output for a company, from an application. From somewhere on the IT team, typically the CIO, a request is put in to the developers to create some sort of software. The software development team breaks the project down into requirements and steps. Sometimes this work will be farmed out to independent contractors, vendors, and freelancers. When this is the case, software engineering tools help ensure that all of the work done is congruent and follows best practices.

How do developers know what to put into their software? They break it down into specific needs
after conducting interviews, collecting information, looking into the existing application portfolio, and
talking to IT leaders. Then, they will build a roadmap of how to build the software. This is one of the
most important parts because much of the “work” is completed during this stage - which also means
that any problems typically occur here as well.

The true starting point is when developers begin to write code for the software. This is the longest
part of the process in many cases as the code needs to be congruent with current systems and the
language used in them. Unfortunately, these problems often aren’t noticed until much later on in the
project and then rework needs to be completed.

The code should be tested as it is written and once it has been completed – at all parts of the life
cycle. With software engineering tools, you will be able to continuously test and monitor.
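
As a minimal illustration of this idea, the sketch below pairs a made-up function, word_count, with a unit test written using Python's standard unittest module; tests like these can be rerun after every change, which is what continuous testing looks like at the smallest scale.

    import unittest

    def word_count(text: str) -> int:
        """Count whitespace-separated words (a made-up function under test)."""
        return len(text.split())

    class WordCountTest(unittest.TestCase):
        def test_counts_a_simple_sentence(self):
            self.assertEqual(word_count("software engineering is iterative"), 4)

        def test_empty_string_has_no_words(self):
            self.assertEqual(word_count(""), 0)

    if __name__ == "__main__":
        unittest.main()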

READING COMPREHENSION
Read the following text, then answer the questions.

SOFTWARE ENGINEERING BASICS


The true work of software engineering begins before the product has even been designed – and the software engineering basics dictate that it continues long after the "work" has been completed. It all begins with a thorough and complete understanding of what your software needs to have: this includes what the software needs to do, the system in which it needs to operate, and all of the security that it entails. Security is one of the software engineering basics because it is so essential to all aspects of development. Without tools to help you better understand how your code is being built and where any security problems may fall, your team can easily become lost in the development stage.

Software engineering design basics require creating the instructions for the computer and the systems. Much of this will take place at the coding level, by professionals who have comprehensive training. Still, it is important to understand that software engineering isn't always a linear process, which means that it requires thorough vetting once it has been completed.
Not all software requires software engineering. Simplistic games or programs that are used by
consumers may not need engineering, depending on the risks associated with them. Almost all
companies do require software engineering because of the high-risk information that they store and
security risks that they pose.
Software engineering helps to create customized, personalized software that should look into
vulnerabilities and risks before they even emerge. Even when the software engineering principles
of safety aren’t required, it can also help to reduce costs and improve customer experience.

Types of Software Engineering


Software engineering studies the design, development, and maintenance of software as an
umbrella definition. Still, there are different types of software engineering that a company or
product may need. Problems tend to emerge when software is low-quality or isn’t properly vetted
before deployment.

There has been a lot of demand for software engineers because of the rate of change in user requirements, statutes, and the platforms we use. Software engineering works on a few different levels:

Operational Software Engineering: Software engineering on the operational level focuses on how the software interacts with the system, whether or not it is on a budget, its usability, functionality, dependability, and security.

Transitional Software Engineering: This type focuses on how software will react when it is changed from one environment to another. It typically takes some scalability or flexibility in the development.

Software Engineering Maintenance: Recurrent software engineering focuses on how the software functions within the existing system, as all parts of it change.

Software engineering functions at all parts of the software development lifecycle, including analysis,
design, development, testing, integration, implementation, maintenance, and even retirement.

It is important to understand that software engineering isn’t a new practice, but it is constantly
changing and can feel new on a regular basis. Software is used in everything around us, so it is
important to ensure that all software is working properly. If it does not, it can result in loss of money,
loss of reputation, and even in some cases, loss of life.

[https://www.castsoftware.com/glossary/what-is-software-engineering-definition-types-of-basics-introduction]

PRACTICE
Match the following words with their definitions:

1. high technology          a) the range of operations that can be run on a computer or other electronic system
2. engineering              b) come into existence or greater prominence
3. functionality            c) take or use another instead of
4. change                   d) instructions for a computer in some programming language, often machine language
5. emerge                   e) advanced technological development, especially in electronics
6. code                     f) a field of study or activity concerned with modification or development in a particular area
7. application developer    g) a position or stage on a scale of quantity, extent, rank, or quality
8. level                    h) a person who writes computer programs to meet specific requirements

Read the text again and decide if the following statements are TRUE or FALSE.

1) Software engineering is a branch of engineering that deals with the development of software products, and security problems are the most important for developers.
2) Software engineering leads to a product that is reliable, efficient, and effective at what it does.
3) All software requires software engineering.
4) Problems tend to emerge even if software is high-quality and is properly vetted before deployment.
5) Software engineering functions at four parts of the software development lifecycle, including analysis, design, development, and testing.
6) Software engineering isn't a new practice, but it is constantly changing and can feel new on a regular basis.
7) Software engineering isn't always a linear process, which means that it requires thorough vetting once it has been completed.
What is Software? Types of Software
Software is a set of instructions, data or programs used to operate computers and execute
specific tasks. It is the opposite of hardware, which describes the physical aspects of a
computer. Software is a generic term used to refer to applications, scripts and programs
that run on a device. It can be thought of as the variable part of a computer, while hardware is
the invariable part.

The two main categories of software are application software and system software. Application
software is a computer software package that performs a specific function for a user, or in some
cases, for another application. An application can be self-contained, or it can be a group of
programs that run the application for the user. Examples of modern applications include office
suites, graphics software, databases and database management programs, web browsers, word
processors, software development tools, image editors and communication platforms.

System software is designed to run a computer's hardware and provides a platform for
applications to run on top of. System software coordinates the activities and functions of the
hardware and software. In addition, it controls the operations of the computer hardware and
provides an environment or platform for all the other types of software to work in. The OS is the
best example of system software; it manages all the other computer programs. Other examples
of system software include the firmware, computer language translators and system utilities.

Other types of software include programming software, which provides the programming tools
software developers need; middleware, which sits between system software and applications;
and driver software, which operates computer devices and peripherals.

Language Processor: As we know that system software converts the human-readable language
into a machine language and vice versa. So, the conversion is done by the language processor.
It converts programs written in high-level programming languages like Java, C, C++, Python,
etc.(known as source code), into sets of instructions that are easily readable by machines
(known as object code or machine code).
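
A small illustration of this translation step: Python exposes its own version of it, where the built-in compile() function turns high-level source code into bytecode and the standard dis module prints the resulting low-level instructions.

    import dis

    # High-level source code, held as a string.
    source = "result = 2 + 3"

    # compile() translates the source into a code object (the object-code analogue).
    bytecode = compile(source, "<example>", "exec")

    # dis prints the low-level instructions the Python virtual machine will run.
    dis.dis(bytecode)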

Driver software. Also known as device drivers, this software is often considered a type of
system software. Device drivers control the devices and peripherals connected to a computer,
enabling them to perform their specific tasks. Every device that is connected to a computer
needs at least one device driver to function. Examples include software that comes with any
nonstandard hardware, including special game controllers, as well as the software that enables
standard hardware, such as USB storage devices, keyboards, headphones and printers.
Middleware. The term middleware describes software that mediates between application and
system software or between two different kinds of application software. For example, middleware
enables Microsoft Windows to talk to Excel and Word. It is also used to send a remote work
request from an application in a computer that has one kind of OS, to an application in a
computer with a different OS. It also enables newer applications to work with legacy ones.
Programming software. Computer programmers use programming software to write code.
Programming software and programming tools enable developers to develop, write, test and
debug other software programs. Examples of programming software include assemblers,
compilers, debuggers and interpreters.

How does software work?


Application software consists of many programs that perform specific functions for end users,
such as writing reports and navigating websites. Applications can also perform tasks for other
applications. Applications on a computer cannot run on their own; they require a computer's OS,
along with other supporting system software programs, to work.

These desktop applications are installed on a user's computer and use the computer memory to
carry out tasks. They take up space on the computer's hard drive and do not need an internet
connection to work. However, desktop applications must adhere to the requirements of the
hardware devices they run on.

Web applications, on the other hand, only require internet access to work; they do not rely on the client's hardware and system software to run. Consequently, users can launch web applications from any device that has a web browser. Since the components responsible for the application functionality are on the server, users can launch the app from Windows, Mac, Linux or any other OS.
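
A minimal sketch of this idea, using only Python's standard library: all of the application logic below lives on the server, so any device with a browser can load the page. The port number 8000 is an arbitrary choice.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HelloHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # All application functionality runs here, on the server side.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<h1>Hello from a web application</h1>")

    if __name__ == "__main__":
        # Clients only need a browser pointed at http://localhost:8000
        HTTPServer(("", 8000), HelloHandler).serve_forever()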
System software sits between the computer hardware and the application software. Users do not
interact directly with system software as it runs in the background, handling the basic functions of
the computer. This software coordinates a system's hardware and software so users can run
high-level application software to perform specific actions. System software executes when a
computer system boots up and continues running as long as the system is on.
DESIGN AND IMPLEMENTATION

The software development lifecycle is a framework that project managers use to describe the
stages and tasks associated with designing software. The first steps in the design lifecycle are
planning the effort and then analyzing the needs of the individuals who will use the software and
creating detailed requirements. After the initial requirements analysis, the design phase aims to
specify how to fulfill those user requirements.

The next step is implementation, where development work is completed, and then software testing happens. The maintenance phase involves any tasks required to keep the system running.
The software design includes a description of the structure of the software that will be
implemented, data models, interfaces between system components and potentially the algorithms
the software engineer will use.

The software design process transforms user requirements into a form that computer
programmers can use to do the software coding and implementation. The software engineers
develop the software design iteratively, adding detail and correcting the design as they develop it.

The different types of software design include the following:


Architectural design. This is the foundational design, which identifies the overall structure of the
system, its main components and their relationships with one another using architectural design
tools.
High-level design. This is the second layer of design that focuses on how the system, along with
all its components, can be implemented in forms of modules supported by a software stack. A
high-level design describes the relationships between data flow and the various modules and
functions of the system.

Detailed design. This third layer of design focuses on all the implementation details necessary
for the specified architecture.
HOW TO MAINTAIN SOFTWARE QUALITY?

Software quality measures if the software meets both its functional and nonfunctional
requirements.
Functional requirements identify what the software should do. They include technical details,
data manipulation and processing, calculations or any other specific function that specifies what
an application aims to accomplish.

Nonfunctional requirements -- also known as quality attributes -- determine how the system should
work. Nonfunctional requirements include portability, disaster recovery, security, privacy and
usability.
Software testing detects and solves technical issues in the software source code and assesses the
overall usability, performance, security and compatibility of the product to ensure it meets its
requirements.
The dimensions of software quality include the following characteristics:

Accessibility. The degree to which a diverse group of people, including individuals who
require adaptive technologies such as voice recognition and screen magnifiers, can
comfortably use the software.
Compatibility. The suitability of the software for use in a variety of environments, such as with
different OSes, devices and browsers.
Efficiency. The ability of the software to perform well without wasting energy, resources, effort,
time or money.
Functionality. Software's ability to carry out its specified functions.
Installability. The ability of the software to be installed in a specified environment.
Localization. The various languages, time zones and other such features a software can
function in.
Maintainability. How easily the software can be modified to add and improve features, fix
bugs, etc.
Performance. How fast the software performs under a specific load.
Portability. The ability of the software to be easily transferred from one location to another.
Reliability. The software's ability to perform a required function under specific conditions for a
defined period of time without any errors.
Scalability. The measure of the software's ability to increase or decrease performance in
response to changes in its processing demands.
Security. The software's ability to protect against unauthorized access, invasion of privacy,
theft, data loss, malicious software, etc.
To maintain software quality once it is deployed, developers must constantly adapt it to
meet new customer requirements and handle problems customers identify. This
includes improving functionality, fixing bugs and adjusting software code to prevent
issues. How long a product lasts on the market depends on developers' ability to keep
up with these maintenance requirements.
When it comes to performing maintenance, there are four types of changes developers can
make, including:
Corrective. Users often identify and report bugs that developers must fix, including coding
errors and other problems that keep the software from meeting its requirements.
Adaptive. Developers must regularly make changes to their software to ensure it is
compatible with changing hardware and software environments, such as when a new
version of the OS comes out.

Perfective. These are changes that improve system functionality, such as improving the
user interface or adjusting software code to enhance performance.
Preventive. These changes are done to keep software from failing and include tasks such
as restructuring and optimizing code.

Software licensing and patents


A software license is a legally binding document that restricts the use and distribution of
software.
Typically, software licenses provide users with the right to one or more copies of the software
without violating copyright. The license outlines the responsibilities of the parties that enter
into the agreement and may place restrictions on how the software can be used.

Software licensing terms and conditions generally include fair use of the software, the
limitations of liability, warranties, disclaimers and protections if the software or its use
infringes on the intellectual property rights of others.

Licenses typically are for proprietary software, which remains the property of the
organization, group or individual that created it; or for free software, where users can run,
study, change and distribute the software. Open source is a type of software where the
software is developed collaboratively, and the source code is freely available. With open
source software licenses, users can run, copy, share and change the software similar to free
software.
Although copyright can prevent others from copying a developer's code, a copyright cannot
stop them from developing the same software independently without copying. A patent, on
the other hand, enables a developer to prevent another person from using the functional
aspects of the software a developer claims in a patent, even if that other person developed
the software independently.

In general, the more technical software is, the more likely it can be patented. For example, a software product could be granted a patent if it creates a new kind of database structure or enhances the overall performance and function of a computer.
[https://searchapparchitecture.techtarget.com/definition/software]

UNIT 2
WHAT IS AN OPERATING SYSTEM?
GLOSSARY

Compression: A method of packing data in order to save disk storage space or download time. Zip and mp3 are examples of two common file compression algorithms (see the short example after this glossary).
Device driver: Software which converts the data from a component or peripheral into data that an operating system can use.
GUI (graphical user interface): An icon-based link between a computer and its operator. Most users prefer an icon-based GUI over a command-line option.
Kernel: The fundamental part of an operating system, responsible for resource management and file access.
Linux: A free Unix-like operating system that runs on standard PC hardware, originally developed by Linus Torvalds.
Multitasking: Concurrent execution of two or more tasks by a processor.
Sign in: To enter information related to an account name and its password in order to access a computer resource.
Sign out: To end a session with a computer or network resource.
Operating system: The most important program on a computer; the system software that manages the hardware and enables communication between the user and the system.
Binary: A numbering scheme in which there are only two possible values for each digit, 0 or 1; it is the basis for all binary code used in computing systems.
Software: A generic term describing all kinds of computer programs, applications and operating systems.
Batch: A system or mode of operation in which inputs are collected and processed all at one time, rather than being processed as they arrive; a job, once started, proceeds to completion without additional input or user interaction.
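
A short illustration of the compression entry: zip files use the same DEFLATE algorithm that Python's standard zlib module exposes, so the effect of lossless compression on repetitive data is easy to see.

    import zlib

    data = b"AAAA" * 100           # highly repetitive data compresses well
    packed = zlib.compress(data)
    print(len(data), "bytes ->", len(packed), "bytes")
    assert zlib.decompress(packed) == data   # compression is lossless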
1. WHAT IS AN OPERATING SYSTEM?
(Updated by the Great Learning Team, Aug 4, 2022)

An Operating System (OS) is the most important program that is first loaded on a
computer when you switch on the system. Operating System is system software. The
communication between a user and a system takes place with the help of operating
systems.

Windows, Linux, and Android are examples of operating systems that enable the user to
use programs like MS Office, Notepad, and games on the computer or mobile phone. It is
necessary to have at least one operating system installed in the computer in order to run
basic programs like browsers.

Operating System - Examples

There are plenty of operating systems available on the market, both paid and free (open source). The following are a few of the most popular:

● Windows: This is one of the most popular commercial operating systems, developed and marketed by Microsoft. It has different versions on the market, like Windows 8 and Windows 10, and most of them are paid.
● Linux: This is a Unix-based and much-loved operating system, first released on September 17, 1991 by Linus Torvalds. Today, it has 30+ variants available, like Fedora, OpenSUSE, CentOS and Ubuntu. Most of them are available free of charge, though you can get their enterprise versions by paying a nominal license fee.
● macOS: This is another Unix-based operating system, developed and marketed by Apple Inc. since 2001.

● iOS: This is a mobile operating system created and developed by Apple Inc. exclusively for its mobile devices, like the iPhone and iPad.

● Android: This is a mobile operating system based on a modified version of the Linux kernel and other open source software, designed primarily for touchscreen mobile devices such as smartphones and tablets.

2. FUNCTIONS OF OPERATING SYSTEM

2.1. Processor Management: An operating system manages the processor’s work


by allocating various jobs to it and ensuring that each process receives enough time from
the processor to function properly.

2.2. Memory Management: An operating system manages the allocation and deallocation of memory to various processes and ensures that one process does not consume the memory allocated to another.
2.3. Device Management: There are various input and output devices. An OS
controls the working of these input-output devices. It receives the requests from these
devices, performs a specific task, and communicates back to the requesting process.

2.4. File Management: An operating system keeps track of information regarding


the creation, deletion, transfer, copy, and storage of files in an organized way. It also
maintains the integrity of the data stored in these files, including the file directory structure,
by protecting against unauthorized access.

2.5. Security: The operating system provides various techniques which assure the
integrity and confidentiality of user data. Following security measures are used to
protect user data:

o Protection against unauthorized access through login.

o Protection against intrusion by keeping the firewall active.

o Protecting the system memory against malicious access.

o Displaying messages related to system vulnerabilities.

2.6. Error Detection: From time to time, the operating system checks the system
for any external threat or malicious software activity. It also checks the hardware for
any type of damage. This process displays several alerts to the user so that the
appropriate action can be taken against any damage caused to the system.

2.7. Job Scheduling: In a multitasking OS where multiple programs run


simultaneously, the operating system determines which applications should run in which
order and how time should be allocated to each application.

3. FEATURES OF OPERATING SYSTEMS


Some important features or characteristics of operating systems are:

1. Provides a platform for running applications


2. Handles memory management and CPU scheduling
3. Provides file system abstraction
4. Provides networking support
5. Provides security features
6. Provides user interface
7. Provides utilities and system services
8. Supports application development

4. COMPONENTS OF OPERATING SYSTEM

The operating system has two components:

● Shell
● Kernel

4.1. WHAT IS SHELL?


It handles user interactions. It is the outermost layer of the OS and manages the interaction
between user and operating system by:

⮚ Prompting the user to give input


⮚ Interpreting the input for the operating system
⮚ Handling the output from the operating system.

Shell provides a way to communicate with the OS by either taking the input from the user
or the shell script. A shell script is a sequence of system commands that are stored in a
file.
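
As an illustrative toy only, the loop below mimics those three shell duties in Python: it prompts for input, interprets it, hands it to the operating system as a new process, and displays the output. A real shell such as bash does far more.

    import shlex
    import subprocess

    while True:
        command = input("toy-shell> ")          # prompt the user for input
        if not command.strip():
            continue
        if command.strip() in ("exit", "quit"):
            break
        try:
            # Interpret the input and hand it to the OS as a new process.
            result = subprocess.run(shlex.split(command),
                                    capture_output=True, text=True)
            # Hand the output back to the user.
            print(result.stdout + result.stderr, end="")
        except FileNotFoundError:
            print(f"command not found: {command}")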

4.2. WHAT IS KERNEL?


The kernel is the core component of an operating system for a computer (OS). All other
components of the OS rely on the core to supply them with essential services. It serves as
the primary interface between the OS and the hardware and aids in the control of devices,
networking, file systems, and process and memory management.

4.2.1. FUNCTIONS OF KERNEL

The kernel is the core component of an operating system, acting as an interface between applications and the data processing performed at the hardware level.

When an OS is loaded into memory, the kernel is loaded first and remains in memory until the
OS is shut down. After that, the kernel provides and manages the computer resources and
allows other programs to run and use these resources. The kernel also sets up the memory
address space for applications, loads the files with application code into memory, and sets up
the execution stack for programs.

The kernel is responsible for performing the following tasks, several of which are illustrated in the sketch after this list:

● Input-Output management
● Memory Management
● Process Management for application execution.
● Device Management
● System calls control
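
To connect this list to running code, the sketch below uses Python's os module, whose functions are thin wrappers around kernel system calls; the file name demo.txt is an arbitrary example.

    import os

    # Process management: ask the kernel for this process's ID.
    print("running as process", os.getpid())

    # File and input-output management: open(), write() and close() system calls.
    fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY)
    os.write(fd, b"written through system calls\n")
    os.close(fd)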

4.2.2. TYPES OF KERNEL

4.2.2.1. Monolithic Kernel: A monolithic kernel is a single large program that contains all operating system components. The entire kernel executes in the processor's privileged mode and provides full access to the system's hardware. Monolithic kernels are faster than microkernels because they don't have the overhead of message passing.

4.2.2.2. Microkernel: A microkernel is a kernel that contains only the essential components required for the basic functioning of the operating system.

4.2.2.3. Hybrid Kernel: A hybrid kernel is a kernel that combines the best features of both monolithic kernels and microkernels.

4.2.2.4. Exokernel: An exokernel is a kernel that provides the bare minimum components required for the basic functioning of the operating system.

5. TYPES OF OPERATING SYSTEM

There are several different types of operating systems:
▪ Batch OS
▪ Distributed OS
▪ Multitasking OS
▪ Network OS
▪ Real-Time OS
▪ Mobile OS

5.1. BATCH OS
A batch OS does not directly interact with the computer. Instead, an operator takes up similar jobs and groups them together into a batch, and then these batches are executed one by one based on the first-come, first-served principle.
5.2. DISTRIBUTED OS
A distributed OS is a recent advancement in the field of computer technology and is being adopted worldwide at a rapid pace. In a distributed OS, various computers are connected through a single communication channel. These independent computers have their own memory unit and CPU and are known as loosely coupled systems. The system processes can be of different sizes and can perform different functions. The major benefit of this type of operating system is that a user can access files that are not present on his own system but on another connected system. In addition, remote access is available to the systems connected to this network.

5.3. MULTITASKING OS
The multitasking OS is also known as a time-sharing operating system, as each task is given
some time so that all the tasks work efficiently. This system provides access to a large number
of users, and each user gets CPU time as they would in a single-user system. The tasks
performed are given by a single user or by different users. The time allotted to execute one task
is called a quantum, and as soon as the time to execute one task is completed, the system
switches over to another task.
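The quantum-based switching described above can be sketched in a few lines of Python. This is a toy round-robin simulation with invented task names and burst times; a real scheduler works on processes, not tuples.

    # Toy round-robin scheduler: each task gets a fixed quantum of "CPU
    # time"; when the quantum expires, the system switches to the next task.
    from collections import deque

    QUANTUM = 3  # time units per turn

    tasks = deque([("browser", 7), ("editor", 4), ("player", 5)])
    clock = 0
    while tasks:
        name, remaining = tasks.popleft()
        ran = min(QUANTUM, remaining)
        clock += ran
        remaining -= ran
        print(f"t={clock:2}: ran {name} for {ran} unit(s)")
        if remaining > 0:
            tasks.append((name, remaining))  # not finished: back of the queue
        else:
            print(f"      {name} finished")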

5.4. NETWORK OS
Network operating systems are the systems that run on a server and manage all the networking
functions. They allow sharing of various files, applications, printers, security, and other
networking functions over a small network of computers like LAN or any other private network.
In the network OS, all the users are aware of the configurations of every other user within the
network, which is why network operating systems are also known as tightly coupled systems.

5.5. REAL-TIME OS
Real-Time operating systems serve real-time systems. These operating systems are useful
when many events occur in a short time or within certain deadlines, such as real-time
simulations.
UNIT 1 PRACTICE

READING COMPREHENSION PRACTICE

LEARNING ABOUT OPERATING SYSTEMS


Taken from: https://www.english4it.com/unit/3/reading

An operating system is a generic term for the multitasking software layer that lets
you perform a wide array of 'lower level tasks' with your computer. By low-level tasks
we mean:

● the ability to sign in with a username and password


● sign out of the system and switch users
● format storage devices and set default levels of file compression
● install and upgrade device drivers for new hardware
● install and launch applications such as word processors, games, etc
● set file permissions and hidden files
● terminate misbehaving applications

A computer would be fairly useless without an OS, so today almost all computers come with an
OS pre-installed. Before 1960, every computer model would normally have its own OS custom
programmed for the specific architecture of the machine's components. Now it is common for an
OS to run on many different hardware configurations.

At the heart of an OS is the kernel, which is the lowest level, or core, of the operating system. The
kernel is responsible for all the most basic tasks of an OS such as controlling the file systems
and device drivers. The only lower-level software than the kernel would be the BIOS, which isn't
really a part of the operating system. We discuss the BIOS in more detail in another unit.

The most popular OS today is Microsoft Windows, which has about 85% of the market share for
PCs and about 30% of the market share for servers. But there are different types of Windows
OSs as well. Some common ones still in use are Windows 98, Windows 2000, Windows XP,
Windows Vista, and Windows Server. Each Windows OS is optimized for different users,
hardware configurations, and tasks. For instance, Windows 98 would still run on a brand new PC
you might buy today, but it's unlikely Vista would run on PC hardware originally designed to run
Windows 98.

There are many more operating systems out there besides the various versions of Windows,
and each one is optimized to perform some tasks better than others. Free BSD, Solaris, Linux
and Mac OS X are some good examples of non-Windows operating systems.

Geeks often install and run more than one OS on a single computer. This is possible with
dual-booting or by using a virtual machine. Why? The reasons for this are varied and may
include preferring one OS for programming, and another OS for music production, gaming, or
accounting work.
An OS must have at least one kind of user interface. Today there are two major kinds of user
interfaces in use, the command line interface (CLI) and the graphical user interface (GUI).
Right now you are most likely using a GUI interface, but your system probably also contains a
command line interface as well.

Typically speaking, GUIs are intended for general use and CLIs are intended for use by
computer engineers and system administrators, although some engineers only use GUIs and
some diehard geeks still use a CLI even to type an email or a letter.

Examples of popular operating systems with GUI interfaces include Windows and Mac
OS X. Unix systems have two popular GUIs as well, known as KDE and Gnome, which run on top
of X-Windows. All three of the above-mentioned operating systems also have built-in CLI
interfaces as well for power users and software engineers. The CLI in Windows is known as
MS-DOS. There are many CLIs for Unix and Linux
operating systems, but the most popular one is called Bash.

In recent years, more and more features are being included in the basic GUI OS install,
including notepads, sound recorders, and even web browsers and games. This is another
example of the concept of 'convergence' which we like to mention.

A great example of an up and coming OS is Ubuntu. Ubuntu is a Linux operating system which
is totally free, and ships with nearly every application you will ever need already installed. Even
a professional quality office suite is included by default. What's more, thousands of free,
ready-to-use applications can be downloaded and installed with a few clicks of the mouse. This
is a revolutionary feature in an OS and can save lots of time, not to mention hundreds or even
thousands of dollars on a single PC. Not surprisingly, Ubuntu's OS market share is growing very
quickly around the world.

As an IT professional, you will probably have to learn and master several, if not all, the popular
operating systems. If you think this sort of thing is fun and interesting, then you have definitely
chosen the right career. We have learned a little about operating systems in this introduction
and you are ready to do more research on your own. The operating system is the lowest
software layer that a typical user will deal with every day. That is what makes it special and
worth studying in detail.
READ THE TEXT AND CHOOSE THE CORRECT OPTION

TRUE OR FALSE?

1. Multitasking is the generic term for multi variables T F


2. The OS is the shortest software layer that a typical user will deal with every day . T F
3. Unix systems have two popular GUIs, known as KDE and Gnome. T F
4. There are two major kinds of user interfaces in use, the command line interface and
the graphical user interface T F
5. The most popular OS today is kernel, which has about 85% of the market share for
PCs and about 30% of the market share for servers. T F
6. At the heart of every operating system is the kernel, which controls the
supply of electricity to the processor. T F
7. Files permissions & hidden files are controlled by file compression. T F
8. Low-level tasks include formatting storage devices & managing device drivers. T F

BIBLIOGRAPHY
https://afteracademy.com/blog/what-is-kernel-in-operating-system-and-what-
are-the-various-types-of-kernel
https://www.mygreatlearning.com/blog/what-is-operating-system/#functions-o
f-operating-systems https://www.youtube.com/watch?v=8kujH0nlgv
UNIT 3
PROGRAMMING LANGUAGE

GLOSSARY
WORD DEFINITION
Low-level language This code is written for specific hardware, will only operate on the hardware
it was written for, and has almost no abstraction from the hardware; examples are
machine code and assembly language.

APPLICATION A program which makes the computer a useful tool.

Compiler Converts a program to a language that the computer understands.

Scripts In programming, a series of scripts, or sets of steps, are written for a
computer to follow. Computers process the steps line-by-line from top to
bottom. Each step is created by writing a statement.

Compile To transform a program written in a high-level programming language
from source code into object code. This can be done by using a tool called a compiler.
A compiler reads the whole source code and translates it into a complete machine
code program to perform the required tasks, which is output as a new file.

Code In computer programming, computer code refers to the set of instructions, or a
system of rules, written in a particular programming language.

High-level language A high-level language (HLL) is a programming language that lets the
developer write programs irrespective of the nature or type of computer. If a
computer has to understand a high-level language, it must be compiled into a machine
language.

Low-level language A low-level language is a language that is very close to machine language
and provides little abstraction of programming concepts. Low-level
languages are closer to the hardware than human languages. The most common examples of
low-level languages are assembly and machine code.

Markup language A markup language is a relatively simple language that consists of
easily understood keywords and tags, used to format the overall view
of a page and its contents.

ASCII Short for American Standard Code for Information Interchange,
ASCII is a standard that assigns letters, numbers, and other characters.

HTML First developed by Tim Berners-Lee in 1990, HTML is short for
Hypertext Markup Language. HTML is used to create electronic
documents (called pages) that are displayed on the World Wide Web.
Each page contains several connections to other pages called hyperlinks.

WHAT IS A COMPUTER LANGUAGE?

A programming language is a computer language engineered to create a standard form


of commands. These commands can be interpreted into a code understood by a machine.

Programming is an important engineering tool. It is the process of writing a computer
program using a computer language. Computer programs are collections of instructions that
tell a computer what to do and how to process data. Our work would have been very
demanding and time consuming without programming.
A programming language refers also to the means of communication that is used by humans
to instruct computers to perform specified tasks.

A programming language is a computer language programmers use to develop software


programs, scripts, or other sets of instructions for computers to execute.

Although many languages share similarities, each has its own syntax. Once a programmer
learns the language's rules, syntax, and structure, they write the source code in a text editor
or IDE. Then, the programmer often compiles the code into machine language that can be
understood by the computer. Scripting languages, which do not require a compiler, use an
interpreter to execute the script.

A programming language consists of a vocabulary containing a set of grammatical rules


intended to convey instructions to a computer or computing device to perform specific tasks.
Each programming language has a unique set of keywords along with a special syntax to
organize the software’s instructions.
1. TYPES OF PROGRAMMING LANGUAGES
Each of the different programming languages mentioned in the next section can be broken
into one or more of the following types (paradigms) of languages.

1.1. HIGH-LEVEL LANGUAGE (MOST COMMON)

High-level languages are designed to be easy to read and understand,
allowing programmers to write source code naturally, using logical words and symbols.

Sometimes abbreviated as HLL, a high-level language is a computer programming language


that isn't limited by the computer, designed for a specific job, and is easier to understand. It is
more like human language and less like machine language. However, for a computer to
understand and run a program created with a high-level language, it must be compiled into
machine language.

1.2. LOW-LEVEL LANGUAGE

Low-level languages include assembly and machine languages. An assembly language


contains a list of basic instructions and is much harder to read than a high-level language.
A low-level language is a programming language that provides little or no abstraction of
programming concepts and is very close to writing actual machine instructions. Two
examples of low-level languages are assembly and machine code.

Low-level languages are useful because programs written in them can be crafted to run very
fast and with a very minimal memory footprint. However, they are considered harder to utilize
because they require a deeper knowledge of machine language.
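One way to see the high-level/low-level contrast without writing assembly is Python's standard dis module, which prints the low-level instructions the interpreter actually executes for a high-level statement. Exact instruction names vary between Python versions; this is only an analogy to assembly, not assembly itself.

    # A high-level expression and the low-level instructions behind it.
    import dis

    def add(a, b):
        return a + b

    # prints instructions such as LOAD_FAST and BINARY_ADD / BINARY_OP
    dis.dis(add)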
2. MAIN FEATURES OF PROGRAMMING LANGUAGES

The features that a programming language must have to stand out are the following:

● Simplicity: the language must offer clear and simple concepts that facilitate learning
and application, in a way that is simple to understand and maintain. Simplicity is a difficult
balance to strike without compromising overall capability.
● Naturalness: this means that its application in the area for which it was designed
must be done naturally, providing operators, structures and syntax for operators to work
efficiently.
● Abstraction: it is the ability to define and use complicated structures or
operations while ignoring certain low-level details.
● Efficiency: Programming languages must be translated and executed efficiently
so as not to consume too much memory or require too much time.
● Structuring: the language allows programmers to write their code according to
structured programming concepts, to avoid creating errors.
● Compactness: with this characteristic, it is possible to express operations
concisely, without having to write too many details.
● Locality: refers to the code concentrating on the part of the program with which
you are working at a given time.

THE MOST POPULAR PROGRAMMING LANGUAGES


JAVASCRIPT

JavaScript is one of the world’s most popular programming languages on the web. As per
the survey, more than 97 percent of the websites use JavaScript on the client-side of
the webpage.
● It has a well-organized codebase that provides enhanced productivity and readability.
● Easy to learn and is highly in demand.
● Platform independence and greater control of the browser.
● Provide user input validation features.
● The top companies using JavaScript are Microsoft, Uber, PayPal, Google, Walmart, etc.

PYTHON

Python can be regarded as the future of programming languages. As per the latest statistics,
Python is the main coding language for around 80% of developers. The presence of
extensive libraries in Python facilitates artificial intelligence, data science, and machine
learning processes. Currently, Python is trending and can be regarded as the king of
programming languages.
It is one of the most lucrative languages that offers amazing features like:

● Easy to learn and code.


● Extensive libraries and frameworks that support a plethora of applications.
● It has variants based on C and Java, such as CPython and Jython.
● GUI support.
● Companies working on Python: Intel, Facebook, Spotify, Netflix, etc.
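As a small illustration of the "easy to learn and code" point, the snippet below (with invented example data) filters and averages a list in a few self-describing lines:

    # Why Python is called readable: filtering and averaging a list
    # takes only a few self-describing lines (scores are made up).
    scores = [72, 95, 88, 60, 79]

    passing = [s for s in scores if s >= 70]   # keep passing scores only
    average = sum(passing) / len(passing)

    print("passing scores:", passing)
    print("average:", average)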

PYTHON ADVANTAGES AND DISADVANTAGES

JAVA

Java is widely utilised in many businesses. It may also be used to make a variety of products
and has a wide range of uses. It is currently one of the most widely used programming languages,
so it's well worth learning.
Java is an object-oriented programming language that produces software for multiple
platforms. When a programmer writes a Java application, the compiled code (known as
bytecode) runs on most operating systems (OS), including Windows, Linux and Mac OS.

Java is one of the most powerful programming languages and is currently used in more
than 3 billion devices. Java is currently one of the most trending technologies. It is used in
desktop applications, mobile applications, web development, Artificial intelligence, cloud
applications, and many more.
Some of the prominent features of Java are:
● Platform independence and Object-oriented programming
● Enhanced productivity, performance, and security
● It is considered one of the most secure languages

WHAT IS JAVA USED FOR?

Here are some important Java applications:

● It is used for developing Android Apps


● Helps you to create Enterprise Software
● Wide range of Mobile java Applications
● Scientific Computing Applications
● Use for Big Data Analytics
● Java Programming of Hardware devices

C++
C++ has a wide range of applications, and studying it is never a bad thing. It is a very simple
language to pick up and understand. In the industry, it has a wide range of applications.
Along with graphic designs and 3-D models, it’s also employed in games.

C
Although C is out of date in some applications, it is not going away anytime soon. It has a
wide range of real-world applications, and it will continue to be used in the industry for many
years to come.
C#

What is C# used for? Like other general-purpose programming languages, C# can be used to
create a number of different programs and applications: mobile apps, desktop apps,
cloud-based services, websites, enterprise software and games.

Lots and lots of games. While C# is remarkably versatile, there are three areas in which it is
most commonly used.
versatile, there are three areas in which it is most commonly used.
C# is:
● Simple to learn and understand.
● Fully integrated with .NET libraries.
The top companies working on C# are Microsoft, Stack Overflow, Accenture, and Alibaba
Travels.

JAVASCRIPT
JavaScript is a widely-used programming language. It is so extensively used that another
programming language may take a long time to replace it. It is also used in artificial intelligence
and other fields, in addition to web development.

RUBY

In today’s world, Ruby is still utilised for a large number of applications. As a result, it’s a great
language to learn because you’ll be able to create complex apps in no time. It also has robust
technology. Therefore it is still relevant today.
PRACTICE
READING COMPREHENSION PRACTICE

COMPUTER LANGUAGES

Unfortunately for us, computers can't understand spoken English or any other natural
language. The only language they can understand directly is machine code, which
consists of 1s and 0s (binary code).

Machine code is too difficult to write. For this reason, we use symbolic languages to
communicate instructions to the computer. For example, assembly languages use
abbreviations such as ADD, SUB, MPY to represent instructions. The program is then
translated into machine code by a piece of software called an assembler. Machine code
and assembly languages are called low-level languages because they are closer to the
hardware. They are quite complex and restricted to particular machines. To make the
programs easier to write, and to overcome the problem of intercommunication between
different types of computer, software developers designed high-level languages, which
are closer to the English language. Here are some examples:

FORTRAN was developed by IBM in 1954 and is still used for scientific and engineering
applications.

COBOL (Common Business Oriented Language) was developed in 1959 and is mainly used for
business applications.

BASIC was developed in the 1960s and was widely used in microcomputer programming
because it was easy to learn. Visual BASIC is a modern version of the old BASIC language,
used to build graphical elements such as buttons and windows in Windows programs.
PASCAL was created in 1971. It is used in universities to teach the fundamentals
of programming.

C was developed in the early 1970s at AT&T's Bell Labs. It is used to write system software,
graphics and commercial applications. C++ is a version of C which incorporates object-oriented
programming: the programmer concentrates on particular things (a piece of text, a graphic or
a table, etc.) and gives each object functions which can be altered without changing the
entire program. For example, to add a new graphics format, the programmer needs to rework
just the graphics object. This makes programs easier to modify.

Java was designed by Sun in 1995 to run on the Web. Java applets provide animation and
interactive features on web pages.
Programs written in high-level languages must be translated into machine code by a compiler
or an interpreter. A compiler translates the source code into object code - that is, it converts
the entire program into machine code in one go. On the other hand, an interpreter translates
the source code line by line as the program is running.
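Python's built-ins can be used to sketch this difference: compile() turns a whole piece of source code into a code object in one go, roughly like a compiler producing object code, and exec() then runs it. The source string below is invented for the demo; this is an analogy, not a machine-code compiler.

    # Compile-then-run in miniature: compile() translates the whole
    # source in one go; exec() then executes the resulting code object.
    source = "x = 6 * 7\nprint('x =', x)"

    code_object = compile(source, filename="<demo>", mode="exec")
    exec(code_object)   # prints: x = 42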
It is important not to confuse programming languages with markup languages,
used to create web documents. Markup languages use instructions, known as
markup tags, to format and link text files. Some examples include:

HTML allows us to describe how information will be displayed on web pages.

XML, which stands for Extensible Markup Language. While HTML uses pre-defined
tags, XML enables us to define our own tags; it is not limited by a fixed set of tags.

VoiceXML, which makes Web content accessible via voice and phone. VoiceXML is
used to create voice applications that run on the phone, whereas HTML is used to
create visual applications (for example, web pages).

<xml>
  <name> Andrea Finch </name>
  <homework> Write a paragraph describing the C language </homework>
</xml>

READ THE TEXT AGAIN AND ANSWER THESE QUESTIONS

1. Do computers understand human languages? Why? / Why not?

2. What is the function of an assembler?

3. Why did software developers design high-level languages?

4. Which language is used to teach programming techniques?

5. What is the difference between a compiler and an interpreter?

6. Why are HTML and VoiceXML called markup languages?

BIBLIOGRAPHY
https://www.youtube.com/watch?v=FhrJAi-eHwI
https://www.geeksforgeeks.org/top-10-programming-languages-to-lea
rn-in-2022/
https://www.snhu.edu/about-us/newsroom/stem/what-is-computer-pro
gramming

https://www.computerhope.com/jargon/p/programming-language.htm#types
https://hackr.io/blog/best-programming-languages-to-learn

https://www.chakray.com/programming-languages-types-and-features/
https://www.pluralsight.com/blog/software-development/everything-you-need-to-know-abou
t-c-

UNIT 4
ARTIFICIAL INTELLIGENCE

GLOSSARY
WORD CONCEPT
CHATBOT / BOT It is also known as a conversational agent or virtual assistant, is a system
capable of carrying on a dialogue with users based on conversations that have
been scripted upstream. Its role is to respond with maximum relevance to
questions that are frequently asked by internet users, clients or personnel.
DATA CRUNCHING Data crunching is the automated analysis of vast amounts of data
originating from Big Data.

ARTIFICIAL INTELLIGENCE (AI) An automated system capable of analyzing data and making
choices autonomously.

MACHINE LEARNING / AUTOMATIC LEARNING Machine Learning is one of the building blocks
of artificial intelligence. The term refers to a process in which a machine, for example a chatbot,
is endowed with the capacity to learn automatically.
ALGORITHM. A set of rules that a machine can follow to learn how to do a task.

AUTONOMOUS: A machine is described as autonomous if it can perform its task or tasks without
needing human intervention.
DATASET: A collection of related data points, usually with a uniform order and tags.

STRONG AI: This field of research is focused on developing AI that is equal to the human mind
when it comes to ability.
WEAK AI: Also called narrow AI, this is a model that has a set range of skills and focuses on
one
particular set of tasks. Most AI currently in use is weak AI, unable to learn or
perform tasks outside of its specialist skill set.
DEEP LEARNING Machine learning technique that teaches computers how to learn by rote (i.e.
machines mimic learning as a human mind would, by using classification
techniques)
Reinforcement Reinforcement learning is an area of machine learning concerned
Learning (RL) with how intelligent agents ought to take actions in an environment in order to
maximize the notion of cumulative reward.
WHAT IS ARTIFICIAL INTELLIGENCE (AI)?

According to Nicole Laskowski, Artificial intelligence is the simulation of human intelligence


processes by machines, especially computer systems. Specific applications of AI include
expert systems, natural language processing, speech recognition and machine vision.

Artificial intelligence (AI) is an area of computer science that involves building smart machines
that are able to perform tasks which usually require human intelligence. Advances in deep
learning and machine learning have allowed AI systems to enter almost every sector in the
tech industries.

Artificial intelligence (AI) refers to the simulation of human intelligence in machines that are
programmed to think like humans and mimic their actions. The term may also be applied to
any machine that exhibits traits associated with a human mind, such as learning and
problem-solving.

The ideal characteristic of artificial intelligence is its ability to rationalize and take actions that
have the best chance of achieving a specific goal. A subset of artificial intelligence is machine
learning (ML), which refers to the concept that computer programs can automatically learn from
and adapt to new data without being assisted by humans. Deep learning techniques enable this
automatic learning through the absorption of huge amounts of unstructured data such as text,
images, or video.

Artificial intelligence allows machines to replicate the capabilities of the human mind. From the
development of self-driving cars to the proliferation of smart assistants like Siri and Alexa, AI is
a growing part of everyday life. As a result, many tech companies across various industries
are investing in it.

Artificial intelligence (AI), also known as machine intelligence, is a branch of computer
science that focuses on building and managing technology that can learn to autonomously
make decisions and carry out actions on behalf of a human being.

AI is not a single technology. It is an umbrella term that includes any type of software or
hardware component that supports machine learning, computer vision, natural language
understanding (NLU) and natural language processing (NLP).

Today’s AI uses conventional CMOS hardware and the same basic algorithmic functions that
drive traditional software. Future generations of AI are expected to inspire new types of
brain-inspired circuits and architectures that can make data-driven decisions faster and more
accurately than a human being.

2. HOW DOES AI WORK?

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the
data for correlations and patterns, and using these patterns to make predictions about future
states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike
exchanges with people, or an image recognition tool can learn to identify and describe objects
in images by reviewing millions of examples.
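The "learn from labeled examples, then predict" loop can be sketched in pure Python. The example below is a minimal 1-nearest-neighbour rule over invented (hours_online, messages_sent) data labeled "bot" or "human"; real systems use millions of examples and far richer models.

    # Learn from labeled examples, then predict: a tiny 1-nearest-neighbour
    # classifier over made-up data (all numbers invented for the demo).
    examples = [
        ((23.5, 900), "bot"),
        ((22.0, 750), "bot"),
        ((2.5,  40),  "human"),
        ((4.0,  65),  "human"),
    ]

    def predict(point):
        # choose the label of the closest known example (squared distance)
        def dist2(p, q):
            return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
        _, label = min(examples, key=lambda ex: dist2(ex[0], point))
        return label

    print(predict((21.0, 800)))   # -> "bot"
    print(predict((3.0, 50)))     # -> "human"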

Artificial intelligence uses machine learning to mimic human intelligence. The computer has
to learn how to respond to certain actions, so it uses algorithms and historical data to create
something called a propensity model.

AI can do much more than this, but those are common uses and functionality for marketing. And
while it might seem like the machines are ready to rise up and take over, humans are still
needed to do much of the work.

Mainly, we use AI to save us time — adding people to email automation and allowing AI to do
much of the work while we work on other tasks.

WHY IS ARTIFICIAL INTELLIGENCE IMPORTANT?

AI is important because it can give enterprises insights into their operations that they may not
have been aware of previously and because, in some cases, AI can perform tasks better than
humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large
numbers of legal documents to ensure relevant fields are filled in properly, AI tools often
complete jobs quickly and with relatively few errors.

WHAT ARE THE ADVANTAGES AND DISADVANTAGES OF ARTIFICIAL INTELLIGENCE?

Artificial neural networks and deep learning artificial intelligence technologies are quickly
evolving, primarily because AI processes large amounts of data much faster and makes
predictions more accurately than humanly possible.
While the huge volume of data being created on a daily basis would bury a human researcher,
AI applications that use machine learning can take that data and quickly turn it into actionable
information. As of this writing, the primary disadvantage of using AI is that it is expensive to
process the large amounts of data that AI programming requires.

ADVANTAGES
● Good at detail-oriented jobs
● Reduced time for data-heavy tasks
● Delivers consistent results
● AI-powered virtual agents are always available

DISADVANTAGES
● Expensive
● Requires deep technical expertise
● Limited supply of qualified workers to build AI tools
● Only knows what it's been shown
● Lack of ability to generalize from one task to another
WHAT ARE THE 4 TYPES OF AI?

Artificial intelligence can be categorized into one of four types.

● Reactive AI uses algorithms to optimize outputs based on a set of inputs. Chess-


playing AIs, for example, are reactive systems that optimize the best strategy to win the game.
Reactive AI tends to be fairly static, unable to learn or adapt to novel situations. Thus, it will
produce the same output given identical inputs.
● Limited memory AI can adapt to past experience or update itself based on new
observations or data. Often, the amount of updating is limited (hence the name), and the
length of memory is relatively short. Autonomous vehicles, for example, can "read the road"
and adapt to novel situations, even "learning" from past experience.
● Theory-of-mind AI is fully adaptive and has an extensive ability to learn and retain
past experiences. These types of AI include advanced chatbots that could pass the Turing
Test, fooling a person into believing the AI was a human being. While advanced and
impressive, these AI are not self-aware.
● Self-aware AI, as the name suggests, becomes sentient and aware of its own
existence. Still in the realm of science fiction, some experts believe that an AI will never
become conscious or "alive".
WHAT ARE EXAMPLES OF AI TECHNOLOGY AND HOW IS IT USED TODAY?

AI is incorporated into a variety of different types of technology. Here are six examples:

​ Automation. When paired with AI technologies, automation tools can expand the
volume and types of tasks performed. An example is robotic process automation
(RPA), a type of software that automates repetitive, rules-based data processing tasks
traditionally done by humans. When combined with machine learning and emerging
AI tools, RPA can automate bigger portions of enterprise jobs, enabling RPA's tactical
bots to pass along intelligence from AI and respond to process changes.

​Machine learning. This is the science of getting a computer to act without programming.
Deep learning is a subset of machine learning that, in very simple terms, can be thought of as
the automation of predictive analytics. There are three types of machine learning algorithms:

o Supervised learning. Data sets are labeled so that patterns can be detected and used
to label new data sets.

o Unsupervised learning. Data sets aren't labeled and are sorted according to
similarities or differences.

o Reinforcement learning. Data sets aren't labeled but, after performing an action or
several actions, the AI system is given feedback.
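As a toy illustration of the reinforcement idea, the sketch below lets an "agent" try two actions and learn from reward feedback which one pays off more often. The reward probabilities are invented and hidden from the agent; real reinforcement learning handles states and long-term rewards.

    # Learning from feedback: the agent estimates the value of two
    # actions from rewards alone (probabilities invented for the demo).
    import random

    random.seed(0)
    reward_prob = {"A": 0.2, "B": 0.8}   # hidden from the agent
    value = {"A": 0.0, "B": 0.0}         # the agent's running estimates
    counts = {"A": 0, "B": 0}

    for step in range(500):
        # explore sometimes, otherwise exploit the best-looking action
        if random.random() < 0.1:
            action = random.choice(["A", "B"])
        else:
            action = max(value, key=value.get)
        reward = 1.0 if random.random() < reward_prob[action] else 0.0
        counts[action] += 1
        value[action] += (reward - value[action]) / counts[action]  # running mean

    print(value)  # the estimate for "B" should end up clearly higher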

6. WHAT ARE THE APPLICATIONS OF AI?

Artificial intelligence has made its way into a wide variety of markets. Here are some examples.

6.1. AI in healthcare. The biggest bets are on improving patient outcomes and reducing
costs. Companies are applying machine learning to make better and faster diagnoses than
humans. One of the best-known healthcare technologies is IBM Watson. It understands
natural language and can respond to questions asked of it.

6.2. AI in business. Machine learning algorithms are being integrated into analytics and customer
relationship management (CRM) platforms to uncover information on how to better serve
customers. Chatbots have been incorporated into websites to provide immediate service to
customers. Automation of job positions has also become a talking point among academics and
IT analysts.

6.3. AI in education. AI can automate grading, giving educators more time. It can assess students
and adapt to their needs, helping them work at their own pace. AI tutors can provide additional
support to students, ensuring they stay on track. And it could change where and how students
learn, perhaps even replacing some teachers.

6.4. AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting


financial institutions. Applications such as these collect personal data and provide financial
advice. Other programs, such as IBM Watson, have been applied to the process of buying a
home. Today, artificial intelligence software performs much of the trading on Wall Street.

7. USE OF ARTIFICIAL INTELLIGENCE ON A PRACTICAL LEVEL

AI is currently being applied to a range of functions both in the lab and in commercial/
consumer settings, including the following technologies:

● Speech Recognition allows an intelligent system to convert human speech


into text or code.
● Natural Language Processing enables conversational interaction between
humans and computers.
● Computer Vision allows a machine to scan an image and use comparative
analysis to identify objects in the image.
● Machine learning focuses on building algorithmic models that can identify
patterns and relationships in data.
● Expert systems gain knowledge about a specific subject and can solve
problems as accurately as a human expert on this subject.

PRACTICE

TASK 1. READING COMPREHENSION ACTIVITY


THE RETURN OF ARTIFICIAL INTELLIGENCE

A After years in the wilderness, the term 'artificial intelligence' (AI) seems poised to make
a comeback. AI was big in the 1980s but vanished in the 1990s. It re-entered public
consciousness with the release of AI, a movie about a robot boy. This has ignited a
public debate about AI, but the term is also being used once more within the computer industry.
Researchers, executives and marketing people are now using the expression without irony or
inverted commas. And it is not always hype. The term is being applied, with some justification,
to products that depend on technology that was originally developed by AI researchers.
Admittedly, the rehabilitation of the term has a long way to go, and some firms still prefer to
avoid using it. But the fact that others are starting to use it again suggests that AI has moved
on from being seen as an over-ambitious and under-achieving field of research.

B The field was launched, and the term 'artificial intelligence' coined, at a conference in
1956 by a group of researchers that included Marvin Minsky, John McCarthy, Herbert Simon
and Alan Newell, all of whom went on to become leading figures in the field. The expression
provided an attractive but informative name for a research programme that encompassed such
previously disparate fields as operations research, cybernetics, logic and computer science.
The goal they shared was an attempt to capture or mimic human abilities using machines.
That said, different groups of researchers attacked different problems, from speech recognition
to chess playing, in different ways; AI unified the field in name only. But it was a term that
captured the public imagination.

C Most researchers agree that AI peaked around 1985. A public reared on science-fiction
movies and excited by the growing power of computers had high expectations. For years, AI
researchers had implied that a breakthrough was just around the corner. Marvin Minsky said in
1967 that within a generation the problem of creating 'artificial intelligence' would be
substantially solved. Prototypes of medical-diagnosis programs and speech recognition
software appeared to be making progress. It proved to be a false dawn. Thinking computers
and household robots failed to materialize, and a backlash ensued. 'There was undue
optimism in the early 1980s,' says David Leake, a researcher at Indiana University. 'Then when
people realized these were hard problems, there was retrenchment.' By the late 1980s, the
term AI was being avoided by many researchers, who opted instead to align themselves with
specific sub-disciplines such as neural networks, agent technology, case-based reasoning,
and so on.

D Ironically, in some ways AI was a victim of its own success. Whenever an apparently
mundane problem was solved, such as building a system that could land an aircraft unattended,
the problem was deemed not to have been AI in the first place. 'If it works, it can't be AI,' as Dr
Leake characterizes it. The effect of repeatedly moving the goal-posts in this way was that AI
came to refer to 'blue-sky' research that was still years away from commercialization.
Researchers joked that AI stood for 'almost implemented'. Meanwhile, the technologies that
made it onto the market, such as speech recognition, language translation and decision-support
software, were no longer regarded as AI. Yet all three once fell well within the umbrella of AI
research.

E But the tide may now be turning, according to Dr Leake. HNC Software of San Diego,
backed by a government agency, reckon that their new approach to artificial intelligence is the
most powerful and promising approach ever discovered. HNC claim that their system, based
on a cluster of 30 processors, could be used to spot camouflaged vehicles on a battlefield or
extract a voice signal from a noisy background - tasks humans can do well, but computers
cannot. 'Whether or not their technology lives up to the claims made for it, the fact that HNC
are emphasizing the use of AI is itself an interesting development,' says Dr Leake.
F Another factor that may boost the prospects for AI in the near future is that investors
are now looking for firms using clever technology, rather than just a clever business model, to
differentiate themselves. In particular, the problem of information overload, exacerbated by the
growth of e-mail and the explosion in the number of web pages, means there are plenty of
opportunities for new technologies to help filter and categorize information - classic AI
problems. That may mean that more artificial intelligence companies will start to emerge to
meet this challenge.

G The 1968 film, 2001: A Space Odyssey, featured an intelligent computer called HAL 9000.
As well as understanding and speaking English, HAL could play chess and even learned to
lip-read. HAL thus encapsulated the optimism of the 1960s that intelligent computers would be
widespread by 2001. But 2001 has been and gone, and there is still no sign of a HAL-like
computer. Individual systems can play chess or transcribe speech, but a general theory of
machine intelligence still remains elusive. It may be, however, that the comparison with HAL
no longer seems quite so important, and AI can now be judged by what it can do, rather than
by how well it matches up to a 30-year-old science-fiction film. 'People are beginning to realize
that there are impressive things that these systems can do,' says Dr Leake hopefully.

TASK 2: DO THE FOLLOWING STATEMENTS AGREE WITH THE INFORMATION GIVEN IN
THE READING? WRITE: TRUE / FALSE / NOT GIVEN (if there is no information on this)

1. The researchers who launched the field of AI had worked together on other
projects in the past. …………..

2. In 1985, AI was at its lowest point. ……….

3. Research into agent technology was more costly than research into neural networks.
……..

4. Applications of AI have already had a degree of success. ………..


5. The problems waiting to be solved by AI have not changed since 1967. ……….

TASK 3: Choose the correct letter, A, B, C or D.

1. According to researchers, in the late 1980s, there was a feeling that


a. a general theory of AI would never be developed.
b. original expectations of AI may not have been justified.
c. a wide range of applications was close to fruition.
d. more powerful computers were the key to further progress.

2. In Dr Leake's opinion, the reputation of AI suffered as a result of


a. changing perceptions.
b. premature implementation.
c. poorly planned projects.
d. commercial pressures.

3. The prospects for AI may benefit from


a. existing AI applications.
b. new business models.
c. orders from Internet-only companies.
d. new investment priorities.
TASK 4: ANSWER THE FOLLOWING QUESTIONS

1. What springs to mind when you hear the term Artificial Intelligence?

2. What is Artificial intelligence in simple words, give some examples?

3. In your words, what are the dangers of Artificial Intelligence?

4. Mention some benefits of Artificial Intelligence. How is it helping us today?


Give some examples.

5. How can Artificial Intelligence be dangerous?



6. Who is known as the "Father of AI"?


7. The main tasks of an AI agent are

8. Weak Artificial Intelligence is……….


___________________________________________________________________________

ARTIFICIAL INTELLIGENCE QUESTIONS PRACTICE 2

1. What is the name for information sent from robot sensors to robot controllers?
a) temperature b) pressure c) feedback d) signal
2. Which of the following terms refers to the rotational motion of a robot arm?
a) swivel b) axle c) retrograde d) roll
3. Which of the following terms IS NOT one of the five basic parts of a robot?
a) peripheral tools b) end effectors c) controller d) drive
4. Decision support programs are designed to help managers make:
a) budget projections b) visual presentations
c) business decisions d) vacation schedules
5. PROLOG is an AI programming language which solves problems with
a form of symbolic logic known as predicate calculus. It was developed in 1972
at the University of Marseilles by a team of specialists. Can you name the
person who headed this team?
a) Alain Colmerauer b) Nicklaus Wirth
c) Seymour Papert d) John McCarthy
6. The number of moveable joints in the base, the arm, and the end effectors of the
robot determines
a) degrees of freedom b) payload capacity c) operational limits d) flexibility
7. Which of the following places would be LEAST likely to include operational robots?
a) warehouse b) factory c) hospitals d) private homes
8. For a robot unit to be considered a functional industrial robot, typically, how many
degrees of freedom would the robot have?
a) three b) four c) six d) eight
9. Which of the basic parts of a robot unit would include the computer circuitry that
could be programmed to determine what the robot would do?
a) sensor b) controller c) arm d) end effector

BIBLIOGRAPHY
https://www.techopedia.com/definition/32836/robotics
https://www.futurelearn.com/info/courses/begin-robotics/0/steps/2840
https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-I
ntelligence

UNIT 5
ROBOTICS

GLOSSARY

WORD CONCEPT
ACCELEROMETER A device for measuring acceleration or force. These are related by
Newton’s second law: force = mass * acceleration
ACCURACY The precision with which a computed or calculated robot position can
be attained.
ACTUATOR A motor that reads programming signals and translates them into
mechanical movement.
ANDROID A humanoid robot designed to resemble an adult human male. The
‘andro’ prefix is in reference to the assigned masculine gender of the
machine.
CONTROLLER SYSTEM A computer of some type that stores data, executes programs, and
directs the operations of the robot.
CYBORG Shorthand for ‘cybernetic organism’, it is any being that possesses
both biological and artificial parts.
COBOTS Robots that interface directly with humans.
END EFFECTOR An end effector is a device at the end of a robotic arm, designed to
interact with the environment, such as our patented suction picker.
CONTROLLER The main device that processes information and carries out
instructions in a robot. Also known as the processor.
CONTROL SYSTEM A method of directing the type of path a robot takes.
JOINT The location at which two or more parts of a robotic arm make
contact. Joints allow parts to move in different directions.
KINEMATICS In robotics, kinematics involves studying the mapping of coordinates
in motion.
PICK AND PLACE The process of picking up an object or part in one location and
placing it in another location
SIMULATOR A software application that creates a virtual world in which robots can
be tested.
ROBOTICS Science of designing, building and applying robots.
GRIPPER Gripper (usually with two fingers) grasping objects of different shape,
mass and material. It is actuated by either pneumatic, hydraulic or
electrical motors. It can be equipped with sensors of force or of proximity
AUTONOMOUS ROBOT Robot with ability to produce and execute its own plan and strategy
of movement.
PROLOG a high-level computer programming language first devised for artificial
intelligence applications.
ACTIVE SENSOR A sensor which instigates an action and then waits for a response, such as
transmitting a signal and measuring the response when it comes back.
ACCELERATION A change in velocity (that is changing speed and/or direction of
travel).
BRAIN-COMPUTER INTERFACE A direct connection between a brain and a computer, allowing
the brain to command the computer or the computer to monitor the brain.
SENSOR A device which is used to measure a quantity – such as the distance to
an object or the speed of a robot.

*IF YOU WANT TO KNOW ABOUT THE HISTORY OF ROBOTICS VISIT


https://www.g2.com/articles/history-of-robots?hsCtaTracking=314fbdf4-
ec3d-40a6-bbdb-5141a44d5781%7C0de54460-a6b2-49cf-9256-4580d2a73c0f

1. WHAT IS ROBOTICS?
Answer the following questions (answer in your notebook).

Would you like a robot to help you in your daily life? What would you want it to do for you?

Would you prefer your robot to look like a human or a machine?

What are some ways technology can help us in our everyday lives? How can doctors
use technology to help people?

Did you ever see a robot? Where did you see it? What did it do?

WHAT DOES ROBOTICS MEAN?

According to Techopedia, robotics is the engineering and operation of
machines that can autonomously or semi-autonomously perform physical tasks on behalf of a
human. Typically, robots perform tasks that are either highly repetitive or too dangerous for a
human to carry out safely. The study of robots is called robotics.

Mechanical robots use sensors, actuators and data processing to interact with the physical
world. Someone who makes their living in robotics must have a strong background in
mechanical engineering, electrical engineering and computer programming.

Robotics is an interdisciplinary branch of engineering and science that includes mechanical


engineering, electronic engineering, information engineering, computer science, and others.

The field of robotics has greatly advanced with several new general technological
achievements. One is the rise of big data, which offers more opportunity to build programming
capability into robotic systems. Another is the use of new kinds of sensors and connected
devices to monitor environmental aspects like temperature, air pressure, light, motion and
more. All of this serves robotics and the generation of more complex and sophisticated robots
for many uses, including manufacturing, health and safety, and human assistance.

WHAT IS THE FUNCTION OF ROBOTICS?

Robotics is an interdisciplinary research area at the interface of computer science and


engineering. Robotics involves design, construction, operation, and use of robots. The goal
of robotics is to design intelligent machines that can help and assist humans in their
day-to-day lives and keep everyone safe.

2. WHAT IS A ROBOT IN ROBOTICS?

A robot is a machine built to carry out a complex task (or set of tasks) by physically moving and
interacting with the world around it. Robots can usually be programmed by a user.

The word “Robot” comes from the Czech word “robota”, meaning “slavery or forced labour”. It
was first used by Czech writer, Karel Čapek, in his 1921 science-fiction play R.U.R. (Rossum’s
Universal Robots).

A robot is an automatically operated machine that replaces human effort, though it may not
resemble human beings in appearance or perform functions in a humanlike manner. By
extension, robotics is the engineering discipline dealing with the design, construction, and
operation of robots.

Robots are programmable machines which are usually able to carry out a series of actions
autonomously, or semi-autonomously.
In my opinion, there are three important factors which constitute a robot:

1. Robots interact with the physical world via sensors and actuators.
2. Robots are programmable.
3. Robots are usually autonomous or semi-autonomous.

5. ASPECTS OF ROBOTS
Robots have a mechanical construction: a form or shape designed to accomplish a
particular task. They have electrical components which power and control the machinery.
They contain some level of computer program that determines what, when and how a
robot does something.

5.1. Legged Locomotion


This type of locomotion consumes more power while demonstrating walk, jump, trot,
hop, climb up or down, etc.
● It requires a greater number of motors to accomplish a movement. It is suited for rough as
well as smooth terrain, where an irregular or too smooth surface makes wheeled locomotion
consume more power. It is a little difficult to implement because of stability issues.
● It comes in a variety of one, two, four, and six legs. If a robot has multiple legs, then
leg coordination is necessary for locomotion.

The total number of possible gaits (a periodic sequence of lift and release events for each of
the total legs) a robot can travel depends upon the number of its legs. If a robot has k legs,
then the number of possible events is N = (2k - 1)!.

In case of k = 2 legs, there are six possible different events:
1. Lifting the left leg
2. Releasing the left leg
3. Lifting the right leg
4. Releasing the right leg
5. Lifting both legs together
6. Releasing both legs together
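The formula can be checked quickly in Python with the standard factorial function:

    # Checking the gait formula N = (2k - 1)! for a k-legged robot.
    from math import factorial

    for k in (1, 2, 4, 6):
        n_events = factorial(2 * k - 1)
        print(f"k = {k} legs -> N = {n_events:,} possible events")
    # k = 2 gives the 6 events listed above; k = 6 gives 39,916,800.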

In case of k = 6 legs, there are 39,916,800 possible events. Hence the complexity of robots is
directly proportional to the number of legs.

5.2. Wheeled Locomotion
It requires a fewer number of motors to accomplish a movement. It is a little easier to
implement, as there are fewer stability issues in case of a greater number of wheels. It is
power efficient as compared to legged locomotion.
● Standard wheel: rotates around the wheel axle and around the contact point.
● Castor wheel: rotates around the wheel axle and the offset steering joint.
● Swedish 45° and Swedish 90° wheels: omni wheels, rotate around the contact point,
around the wheel axle, and around the rollers.
● Ball or spherical wheel: omnidirectional wheel, technically difficult to implement.

6. COMPONENTS OF A ROBOT

Robots are constructed with the following:

Power Supply: Robots are powered by batteries, solar power, hydraulic, or pneumatic
power sources.

Actuators: They convert energy into movement.

Electric motors (AC/DC): They are required for rotational movement.

Pneumatic Air Muscles: They contract almost 40% when air is sucked into them.

Muscle Wires: They contract by 5% when electric current is passed through them.

Piezo Motors and Ultrasonic Motors: Best for industrial robots.

Sensors: They provide real-time information on the task environment. Robots
are equipped with vision sensors to be able to compute the depth in the environment.
A tactile sensor imitates the mechanical properties of touch receptors of human fingertips.

Computer Vision:
This is a technology of AI with which robots can see. Computer vision plays a vital role in
the domains of safety, security, health, access, and entertainment.
Computer vision automatically extracts, analyzes, and comprehends useful information from a
single image or an array of images. This process involves the development of algorithms to
accomplish automatic visual comprehension.

Hardware of a Computer Vision System

This involves:
● Power supply
● Image acquisition device, such as a camera
● A processor
● Software
● A display device for monitoring the system
● Accessories such as camera stands, cables, and connectors

7. PARTS OF A ROBOT

Robots can be made in surprisingly many ways, using all manner of materials. But most robots
share a great deal in common. Below you will find descriptions of the most common elements
that are used in constructing robots.

7.1. SENSORS
Sensors are what allow a robot to gather information about its environment. This
information can be used to guide the robot's behavior. Some sensors are relatively familiar
pieces of equipment. Cameras allow a robot to construct a visual representation of its
environment. This allows the robot to judge attributes of the environment that can only be
determined by vision, such as shape and color, as well as aid in determining other important
qualities, such as the size and distance of objects.

Microphones allow robots to detect sounds. Sensors such as buttons embedded in bumpers
can allow the robot to determine when it has collided with an object or a wall. Some robots
come equipped with thermometers and barometers to sense temperature and pressure.

Other types of sensors are more complex, and give a robot more interesting capabilities.
Robots equipped with LIght Detection And Ranging (LIDAR)
sensors use lasers to construct three dimensional maps of their surroundings as they navigate
through the world. Supersonic sensors are a cheaper way to accomplish a similar goal only
using high frequency sound instead of lasers. Finally, some robots are equipped with
specialized sensors such as accelerometers and magnetometers that allow the robot to sense
its movement with respect to the Earth's gravity and magnetic field.

7.2. EFFECTORS

The effectors are the parts of the robot that actually do the work. Effectors can be any
sort of tool that you can mount on your robot and control with the robot's computer. Most
of the time, the effectors are specific to the tasks that you want your robot to do. For
example, in addition to some of the very common effectors listed below, the Mars rovers
have tools like hammers, shovels, and a mass spectrometer to use in analyzing the soil
of Mars. Obviously, a mail-delivering robot would not need any of those.

End-effectors are the tools at the end of robotic arms and other robotic appendages that
directly interact with objects in the world. A "gripper" at the end of a robotic arm is a
common end-effector. Others include spikes, lights, hammers, and screwdrivers.
Medical robots have their own specialized effectors, such as tools for cutting in surgery
and suturing incisions.
Motors (such as servo motors) can be used for many of the moving parts of a robot, from joints
on robotic limbs to wheels on robotic vehicles, to the flaps and propellers on a robotic airplane.
Pneumatics and hydraulics are another way of moving parts of the robot, particularly where
the robot needs a lot of strength to perform a particular task.

Speakers can allow certain robots to talk to us or generate other sounds. Speech is, after all, a
behavior intended to modify the environment, usually by conveying some sort of information to
the people around us.

8. CONTROL SYSTEMS (THE "BRAINS")

A robot's "control system" is that part of the robot that determines the robot's behavior.
A. Pre-Programmed Robots

The very simplest pre-programmed robot merely repeats the same operations over and over.
Such a robot is either insensitive to changes in its environment or it can detect only very limited
information about very limited parts of the environment. Such a robot will require little in the
way of "controls", but it will perform properly only if the environment behaves in accord with the
robot's pre-programmed actions.

9. TYPES OF ROBOTS

Mechanical bots come in all shapes and sizes to efficiently carry out the task for which they
are designed. All robots vary in design, functionality and degree of autonomy, ranging in
scale from the 0.2 millimeter-long “RoboBee” to vessels hundreds of meters long.

9.1. Pre-Programmed Robots


Pre-programmed robots operate in a controlled environment where they do simple,
monotonous tasks. An example of a pre-programmed robot would be a mechanical arm on an
automotive assembly line. The arm serves one function — to weld a door on, to insert a
certain part into the engine, etc. — and its job is to perform that task longer, faster and more
efficiently than a human.

9.2. Humanoid Robots


Humanoid robots are robots that look like and/or mimic human behavior. These robots usually
perform human-like activities (like running, jumping and carrying objects), and are sometimes
designed to look like us, even having human faces and expressions. One of the most
prominent examples of a humanoid robot is Hanson Robotics’ Sophia.

9.3. Autonomous Robots

Autonomous robots operate independently of human operators. These robots are usually
designed to carry out tasks in open environments that do not require human supervision. They
are quite unique because they use sensors to perceive the world around them, and then employ
decision-making structures (usually a computer) to take the optimal next step based on their
data and mission. An example of an autonomous robot would be the Roomba vacuum cleaner,
which uses sensors to roam freely throughout a home.

10. ARE ROBOTICS AND ARTIFICIAL INTELLIGENCE THE SAME THING?


The first thing to clarify is that robotics and artificial intelligence are not the same thing at all. In
fact, the two fields are almost entirely separate.

A Venn diagram of the two fields would show only a small area of overlap. People sometimes
confuse the two because of that overlap: artificially intelligent robots.
To understand how these three terms relate to each other, let's look at each of them
individually. The term robotics was introduced by writer Isaac Asimov. In his science fiction
book I, Robot, published in 1950, he presented three laws of robotics:

1. A robot may not injure a human being, or, through inaction, allow a human being
to come to harm.

2. A robot must obey the orders given it by human beings except where such
orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with
the First or Second Law.

ROBOTICS ENGINEERING

Robotics is an interdisciplinary branch of engineering and science that includes mechanical
engineering, electrical engineering, computer science, and others. Robotics deals with the
design, construction, operation, and use of robots, as well as computer systems for their
control, sensory feedback, and information processing.

Robots are used in various applications. There are many jobs which humans would rather
leave to robots. The job may be boring, such as domestic cleaning, or dangerous, such as
exploring inside a volcano. Today's robots assist in high-precision surgeries such as brain
and heart surgery. They are also used to test quality control in pharmaceuticals.

11. IS ROBOTICS PART OF AI? IS AI PART OF ROBOTICS? WHAT IS THE DIFFERENCE BETWEEN THE TWO TERMS?

Robotics and artificial intelligence serve very different purposes. However, people often get
them mixed up. A lot of people wonder if robotics is a subset of artificial intelligence or if they
are the same thing.

Robots are aimed at manipulating objects: perceiving, picking, moving, modifying or
destroying them, or having some other physical effect, thereby freeing manpower from
repetitive functions that robots can perform without getting bored, distracted, or exhausted.

COMPLETE THE FOLLOWING ACTIVITIES


TASK 1: ANSWER BRIEFLY THE FOLLOWING QUESTIONS.

1. What do you understand by the term, robotics?

2. What is Robotic Automation?

3. What are the benefits of Robotic Process Automation?

4. What are the components of a robot?

5. What are the laws of robotics? (Mention and explain)

6. List the areas where robotics can be applied.

7. Why do we use robots in the industry?

8. What is AI? Why do we implement AI in the robots?

9. What are the various types of sensors used in robotics?

10. What is robot locomotion?


11. What is an Autonomous robot?

12. What is “human-robot interaction”?

13. How is information sent from the robot sensors to the robot controllers?

14. What is the Pneumatic System in robotics?

Read the following passage carefully. Then complete the exercises that
follow.
Read each question carefully. Circle the letter or the number of the correct answer.

1. ASIMO traveled to Edinburgh, Scotland, for the annual Edinburgh International Science
Festival. The festival takes place in February every year.

Annual means:

a. scientific. b. international. c. every year.


2. ASIMO is designed to run, climb stairs, and kick a soccer ball. It can even conduct an
orchestra.
a. Designed means:
1. made.
2. performed.
3. climbed.

b. Conduct means:
1. play.
2. lead.
3. perform.

c. Not only . . . but also means:


1. however.
2. except.
3. and.

d. Why were the people amazed by ASIMO?


1. It’s a good conductor.
2. It can play the cello.
3. It’s a robot.

4. Scientists developed robots more than 60 years ago. For many years, robots have
worked in factories. They do uninteresting jobs such as packaging food or assembling cars.
a. Developed means:
1. learned about.
2. thought about.
3. made.

b. Something uninteresting is …….
1. dangerous.
2. boring.
3. difficult.

c. Packaging food is …….
1. making food for a company.
2. carrying food to a truck.
3. putting food into boxes.

d. Assembling means …….
1. putting together.
2. driving.
3. checking.

TOPICS FOR DISCUSSION AND WRITING


1. Robots can do many different jobs. What jobs do you think robots cannot do? Why
not? Discuss your ideas with your classmates.

2. Robots do many dangerous and boring jobs. Robots also do interesting jobs. For
example, ASIMO can conduct an orchestra. Will people be happy if robots do interesting
jobs for them? Why or why not?

3. What are some of the advantages of having robots work in factories and other places,
such as hospitals and homes for senior citizens? What are some of the disadvantages?

4. Write in your journal. Imagine that you have a robot teacher. Write a letter to a friend,
and describe your robot teacher. Tell your friend about your class. Do you enjoy your robot
teacher? Why or why not?

BIBLIOGRAPHY
https://robotical.io/blog/robot-terminology/
https://www.devopsschool.com/blog/what-is-robotics-and-what-are-the-advantages-and-disadvantages-in-detail/
https://www.techopedia.com/definition/32836/robotics
UNIT 6

FUTURE TRENDS OF INFORMATION TECHNOLOGY (IT)

Information systems have evolved at a rapid pace ever since their introduction in the 1950s.
The Internet has made the entire world accessible to us, allowing us to communicate and
collaborate with each other like never before.

Technology today is evolving at a rapid pace, enabling ever faster change and progress.
However, it is not only technology trends and emerging technologies that are evolving; a lot
more has changed in recent years due to the outbreak of COVID-19, making IT professionals
realize that their role will not stay the same in the contactless world of tomorrow. An IT
professional in 2021-22 will constantly be learning, unlearning, and relearning (out of
necessity if not desire).

Information Technology is the concept involving the development, maintenance, and use of
computer systems, software, and networks for the processing and distribution of data, often
in the context of a business or other enterprise. IT is considered to be a subset of information
and communications technology (ICT).

In this Unit we present the 9 emerging technology trends that we should watch for and try in
2023, and possibly secure one of the jobs that will be created by these new technology
trends. They include:

1. Artificial Intelligence and Machine Learning

2. Robotic Process Automation (RPA)


3. Edge Computing

4. Quantum Computing

5. Virtual Reality and Augmented Reality

6. Blockchain

7. Internet of Things

8. 5G

9. Cybersecurity

1. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

1.1. What Is Artificial Intelligence?

Artificial Intelligence is a method of making a computer, a computer-controlled robot, or
software think intelligently like the human mind. AI is accomplished by studying the patterns of
the human brain and by analyzing the cognitive process. The outcome of these studies
develops intelligent software and systems.
1.2. How does AI work?
AI systems work by merging large data sets with intelligent, iterative processing algorithms. This
combination allows AI to learn from patterns and features in the analyzed data. Each time an
Artificial Intelligence system performs a round of data processing, it tests and measures its
performance and uses the results to develop additional expertise.

READ THE FOLLOWING TEXT. YOU CAN USE A DICTIONARY IF NECESSARY


Artificial Intelligence

Artificial intelligence is the simulation of human intelligence processes by machines, especially
computer systems. Specific applications of AI include expert systems, natural language
processing, speech recognition and machine vision.

As the hype around AI has accelerated, vendors have been scrambling to promote how their
products and services use AI. Often what they refer to as AI is simply one component of AI,
such as machine learning. AI requires a foundation of specialized hardware and software for
writing and training machine learning algorithms. No one programming language is synonymous
with AI, but a few, including Python, R and Java, are popular.
In general, AI systems work by ingesting large amounts of labeled training data, analyzing the
data for correlations and patterns, and using these patterns to make predictions about future
states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike
exchanges with people, or an image recognition tool can learn to identify and describe objects in
images by reviewing millions of examples.

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating
rules for how to turn the data into actionable information. The rules, which are called algorithms,
provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm
to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune
algorithms and ensure they provide the most accurate results possible.
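
To make the learning-reasoning-self-correction cycle concrete, here is a minimal Python sketch; the data and the 0.05 adjustment step are invented for illustration and this is not a real AI system.

# Toy illustration of the learn / reason / self-correct cycle described above.
# The model guesses a multiplier, measures its error on known data, and
# repeatedly nudges the guess in the direction that reduces the error.

data = [(1, 3), (2, 6), (4, 12)]  # inputs paired with known outputs (y = 3x)
weight = 0.0                      # the model's current "rule"

for step in range(50):
    # Reasoning: apply the current rule and measure how wrong it is.
    error = sum((weight * x - y) * x for x, y in data) / len(data)
    # Self-correction: fine-tune the rule so the error shrinks.
    weight -= 0.05 * error

print(round(weight, 3))  # converges near 3.0, the rule hidden in the data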
AI is important because it can give enterprises insights into their operations that they may not
have been aware of previously and because, in some cases, AI can perform tasks better than
humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large
numbers of legal documents to ensure relevant fields are filled in properly, AI tools often
complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business
opportunities for some larger enterprises. Prior to the current wave of AI, it would have been
hard to imagine using computer software to connect riders to taxis, but today Uber has become
one of the largest companies in the world by doing just that. It utilizes sophisticated machine
learning algorithms to predict when people are likely to need rides in certain areas, which helps
proactively get drivers on the road before they're needed. As another example, Google has
become one of the largest players for a range of online services by using machine learning to
understand how people use their services and then improving them. In 2017, the company's
CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and
gain an advantage over their competitors.
Artificial neural networks and deep learning artificial intelligence technologies are quickly
evolving, primarily because AI processes large amounts of data much faster and makes
predictions more accurately than is humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher,
AI applications that use machine learning can take that data and quickly turn it into actionable
information. As of this writing, the primary disadvantage of using AI is that it is expensive to
process the large amounts of data that AI programming requires.

ADVANTAGES
−Good at detail-oriented jobs;
−Reduced time for data-heavy tasks;
−Delivers consistent results; and
−AI-powered virtual agents are always available.

DISADVANTAGES
−Expensive;
−Requires deep technical expertise;
−Limited supply of qualified workers to build AI tools;
−Only knows what it's been shown; and
−Lack of ability to generalize from one task to another.
[https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence]

1.3. Answer the following questions:

1) How does AI work?
2) Why is artificial intelligence important?
3) What are the advantages and disadvantages of artificial intelligence?

MACHINE LEARNING
Machine Learning is an application of Artificial Intelligence that enables systems to learn from
vast volumes of data and solve specific problems. It uses computer algorithms that improve
their efficiency automatically through experience.

Taken from: https://www.simplilearn.com/tutorials/machine-learning-tutorial/types-of-machine-learning

Machine learning is a core sub-area of Artificial Intelligence (AI). ML applications learn from
experience (or to be accurate, data) like humans do without direct programming. When
exposed to new data, these applications learn, grow, change, and develop by themselves. In
other words, machine learning involves computers finding insightful information without being
told where to look. Instead, they do this by leveraging algorithms that learn from data in an
iterative process.

As you input more data into a machine, the algorithms use it to teach the computer, thus
improving the delivered results. When you ask Alexa to play your favorite music station on
Amazon Echo, she will go to the station you played most often. You can further improve and
refine your listening experience by telling Alexa to skip songs, adjust the volume, and many
more possible commands.

1.2.1. TYPES OF MACHINE LEARNING

1. Supervised learning is a type of machine learning that uses labeled data to train
machine learning models. In labeled data, the output is already known. The model just
needs to map the inputs to the respective outputs. An example of supervised learning is to
train a system that identifies the image of an animal (see the sketch after this list).

2. Unsupervised learning is a type of machine learning that uses unlabeled data to train
machines. Unlabeled data doesn’t have a fixed output variable. The model learns from the
data, discovers the patterns and features in the data, and returns the output. An example of an
unsupervised learning technique is one that uses the images of vehicles to group them into
buses and trucks.

3. Reinforcement Learning trains a machine to take suitable actions and maximize its
rewards in a particular situation. It uses an agent and an environment to produce actions and
rewards. The agent has a start and an end state, but there might be different paths for
reaching the end state, like a maze. In this learning technique, there is no predefined target
variable. An example of reinforcement learning is to train a machine that can identify the shape
of an object, given a list of different objects; the model tries to predict the shape of each
object, such as a square.
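
The following minimal Python sketch illustrates the supervised learning idea from point 1. It assumes the scikit-learn library is installed; the animal measurements and labels are invented toy data, not a real dataset.

# A minimal supervised-learning sketch (assumes scikit-learn is installed).
# Labeled examples map inputs to known outputs; the model learns the mapping.
from sklearn.tree import DecisionTreeClassifier

# Each example: [weight in kg, ear length in cm]; the labels are already known.
X = [[4.0, 6.5], [5.2, 7.0], [30.0, 12.0], [28.5, 11.0]]
y = ["cat", "cat", "dog", "dog"]

model = DecisionTreeClassifier()
model.fit(X, y)  # training: learn to map inputs to the labeled outputs

print(model.predict([[27.0, 11.5]]))  # -> ['dog'], an example it was never shown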

2. ROBOTIC PROCESS AUTOMATION (RPA)

2.1. WHAT IS RPA?

Robotic Process Automation is the use of software with Artificial Intelligence (AI) and machine
learning (ML) capabilities to handle high-volume, repeatable tasks that previously required
humans to perform. Some of these tasks include:

● Addressing queries

● Making calculations

● Maintaining records

● Making transactions

Traditional automation involves programming application programming interfaces (APIs) and
integration tools to integrate different systems.
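
A hypothetical Python sketch of the kind of repeatable task an RPA bot might handle is shown below; the invoice records, field names, and tax rate are all invented for illustration and do not come from any real RPA product.

# Hypothetical sketch of a repeatable, rule-based task an RPA bot automates:
# reading records, making a calculation, and updating each record the same
# way every time, without a human in the loop.

invoices = [
    {"id": "INV-001", "net": 120.00, "paid": False},
    {"id": "INV-002", "net": 75.50,  "paid": True},
]

TAX_RATE = 0.13  # assumed rate, for the example only

for invoice in invoices:
    invoice["total"] = round(invoice["net"] * (1 + TAX_RATE), 2)  # calculation
    if not invoice["paid"]:
        print("Reminder queued for", invoice["id"], "- amount due:", invoice["total"])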

2.2. RPA Characteristics

Rich analytical suite - RPA monitors and manages automated functions from a central
console. This console can be accessed from anywhere and offers basic metrics on robots,
servers, workflows, and more.

Simple creation of bots - RPA tools enable the quick creation of bots by capturing mouse
clicks and keystrokes with built-in screen recorder components.

Scriptless automation - RPA tools are code-free and can automate any application in any
department. Users with little programming skill can create bots through an intuitive GUI.

Security - RPA tools enable the configuration and customization of encryption capabilities to
secure certain data types and defend against the interruption of network communication.

Hosting and deployment - RPA systems can automatically deploy bots in groups of
hundreds. Hence, RPA bots can be installed on desktops and deployed on servers to access
data for repetitive tasks.

Debugging - Some RPA tools need to stop running to rectify errors, while other tools allow
dynamic interaction while debugging. This is one of the most powerful features of RPA.

2.3. What RPA Is Not

When you hear the term automation, you might think of a robot doing its job without any human
intervention. And yes, you’re partially right. However, you could have a few misconceptions,
such as:

● RPA is not a humanoid robot and doesn’t exist physically.

● RPA bots cannot entirely replace humans.

● RPA bots do not possess logical thinking or decision-making skills, which is why they cannot
replicate human cognitive functions.

3. EDGE COMPUTING

Edge Computing has transformed how data from multiple devices is handled, processed, and
delivered across the world. It is a distributed computing framework that ensures the proximity
of enterprise applications to data sources.

Edge computing is the computational processing of sensor data away from the centralized
nodes and close to the logical edge of the network, toward individual sources of data. It
may be referred to as a distributed IT network architecture that enables mobile computing for
data produced locally.

Edge computing differs from cloud computing because relaying information to a centralized
data center takes time, sometimes up to 2 seconds, delaying the decision-making process.
That signal latency can lead to the organization incurring losses, so for time-critical workloads
organizations prefer edge computing to cloud computing.

CLOUD COMPUTING Vs EDGE COMPUTING

CLOUD COMPUTING:
● Centralized servers stored in faraway large-scale data centers.
● Costly and intensive operational activities for the company.
● Connectivity, data migration, bandwidth, and latency features are pretty expensive.

EDGE COMPUTING:
● Highly distributed and global computing infrastructure, closer to users.
● Automated scalability, with zero-touch provisioning.
● Less bandwidth requirement and lower latency, improved performance and reduced operational costs.

The main difference between cloud and edge containers is the location. Edge containers are
located at the edge of a network, closer to the data source, while cloud containers
operate in a data center. Organizations that have already implemented containerized cloud
solutions can easily deploy them at the edge.

It’s important to understand that cloud and edge computing are different, non-interchangeable
technologies that cannot replace one another. Edge computing is used to process
time-sensitive data, while cloud computing is used to process data that is not time-driven.
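
To see why this matters, the short Python arithmetic below compares how far a vehicle travels while waiting for a decision, using the "up to 2 seconds" cloud figure mentioned above and an assumed 10-millisecond edge round trip (both numbers are illustrative).

# How far a vehicle moving at 100 km/h travels while waiting for a decision.
speed_m_s = 100 * 1000 / 3600  # 100 km/h expressed in meters per second

for name, delay_s in [("cloud", 2.0), ("edge", 0.010)]:
    meters = speed_m_s * delay_s
    print(name + ":", round(meters, 2), "meters traveled before the decision arrives")

# cloud: 55.56 meters; edge: 0.28 meters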

Cloud computing has become mainstream, with major players AWS (Amazon Web Services),
Microsoft Azure and Google Cloud Platform dominating the market. The adoption of cloud
computing is still growing, as more and more businesses migrate to a cloud solution.

4. QUANTUM COMPUTING

Quantum computers are incredibly powerful machines that take a new approach to processing
information.
Quantum computing is an area of computer science that uses the principles of quantum theory.
Quantum theory explains the behavior of energy and material on the atomic and subatomic
levels.
Quantum computing uses subatomic particles, such as electrons or photons. Quantum bits, or
qubits, allow these particles to exist in more than one state (i.e., 1 and 0) at the same time.

● Quantum computing uses phenomena in quantum physics to create new ways of computing.
● Quantum computing involves qubits.

Quantum theory explains the nature and behavior of energy and matter on the
quantum (atomic and subatomic) level. Quantum computing uses a combination of qubits to
perform specific computational tasks, all at a much higher efficiency than their classical
counterparts. The development of quantum computers marks a leap forward in computing
capability, with massive performance gains for specific use cases. For example, quantum
computing excels at tasks like simulations.

Quantum computing has the capability to sift through huge numbers of possibilities and extract
potential solutions to complex problems and challenges. Where classical computers store
information as bits with either 0s or 1s, quantum computers use qubits.

Quantum computers are different from digital electronic computers based on transistors.
Quantum computation uses quantum bits called qubits.
Compared to traditional computing done by a classical computer, a quantum computer should
be able to store much more information and operate with more efficient algorithms. This
translates to solving extremely complex tasks faster.

4.1. THE ESSENTIAL ELEMENTS OF QUANTUM THEORY:

● Energy, like matter, consists of discrete units, as opposed to a continuous wave.

● Elementary particles of energy and matter, depending on the conditions, may behave like
particles or waves.

● The movement of elementary particles is inherently random, and, thus, unpredictable.

● The simultaneous measurement of two complementary values -- such as the position and
momentum of a particle -- is flawed. The more precisely one value is measured, the more
flawed the measurement of the other value will be.

USES AND BENEFITS OF QUANTUM COMPUTING


Quantum computers could be used to improve the secure sharing of information, or to improve
radars and their ability to detect missiles and aircraft. Another area where quantum computing
is expected to help is the environment and keeping water clean with chemical sensors.

These are some potential benefits of quantum computing:

● Financial institutions may be able to use quantum computing to design more effective and
efficient investment portfolios for retail and institutional clients. They could focus on creating
better trading simulators and improve fraud detection.

● The healthcare industry could use quantum computing to develop new drugs and
genetically-targeted medical care. It could also power more advanced DNA research.

● For stronger online security, quantum computing can help design better data encryption and
ways to use light signals to detect intruders in the system.

● Quantum computing can be used to design more efficient, safer aircraft and traffic planning
systems.

FEATURES OF QUANTUM COMPUTING

Superposition and entanglement are two features of quantum physics on which quantum
computing is based. They empower quantum computers to handle operations at speeds
exponentially higher than conventional computers and with much less energy consumption.

SUPERPOSITION

According to IBM, it's what a qubit can do rather than what it is that's remarkable. A qubit
places the quantum information that it contains into a state of superposition. This refers to a
combination of all possible configurations of the qubit. Groups of qubits in superposition can
create complex, multidimensional computational spaces, and complex problems can be
represented in new ways in these spaces.

ENTANGLEMENT

Entanglement is integral to quantum computing power. Pairs of qubits can be made to
become entangled. This means that the two qubits then exist in a single state. In such a
state, changing one qubit directly affects the other in a manner that's predictable.

Quantum algorithms are designed to take advantage of this relationship to solve complex
problems. While doubling the number of bits in a classical computer doubles its processing
power, adding qubits results in an exponential upswing in computing power and ability.
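
Superposition and entanglement can be simulated mathematically on a classical machine. The sketch below, assuming the NumPy library is installed, prepares one qubit in superposition and then entangles two qubits into a Bell state; it models the mathematics only and is not a real quantum computer.

# A small state-vector sketch of superposition and entanglement using NumPy.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)                       # the |0> state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

plus = H @ ket0                # superposition: equal mix of |0> and |1>
print(np.abs(plus) ** 2)       # [0.5 0.5] -- equal odds of measuring 0 or 1

# Entangle two qubits into a Bell state: apply CNOT to (H|0>) tensor |0>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)
print(np.abs(bell) ** 2)  # [0.5 0 0 0.5]: only 00 and 11 occur, perfectly correlated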

LIMITATIONS OF QUANTUM COMPUTING

Quantum computing offers enormous potential for developments and problem-solving in many
industries. However, currently, it has its limitations.

● Decoherence, or decay, can be caused by the slightest disturbance in the qubit
environment. This results in the collapse of computations or errors in them. As noted above,
a quantum computer must be protected from all external interference during the computing
stage.

● Error correction during the computing stage hasn't been perfected. That makes
computations potentially unreliable. Since qubits aren't digital bits of data, they can't benefit
from conventional error correction solutions used by classical computers.

● Retrieving computational results can corrupt the data. Developments such as a particular
database search algorithm, which ensures that the act of measurement will cause the
quantum state to decohere into the correct answer, hold promise.

● Security and quantum cryptography are not yet fully developed.

● A lack of qubits prevents quantum computers from living up to their potential for impactful
use.
QUANTUM COMPUTER VS. CLASSICAL COMPUTER

Quantum computers have a more basic structure than classical computers. They have no
memory or processor. All a quantum computer uses is a set of superconducting qubits.

Quantum computers and classical computers process information differently. A quantum
computer uses qubits to run multidimensional quantum algorithms, and its processing power
increases exponentially as qubits are added. A classical processor uses bits to operate various
programs, and its power increases linearly as more bits are added. Classical computers have
much less computing power.

BIBLIOGRAPHY

https://www.investopedia.com/terms/q/quantum-computing.asp

5. VIRTUAL REALITY AND AUGMENTED REALITY

The next exceptional technology trends are Virtual Reality (VR), Augmented Reality (AR), and
Extended Reality (XR). VR immerses the user in an environment, while AR enhances their
environment. Although this technology trend has primarily been used for gaming thus far, it
has also been used for training, as with Virtual Ship, a simulation software used to train U.S.
Navy, Army and Coast Guard ship captains.

In 2022, we can expect these forms of technology to be further integrated into our lives.
Usually working in tandem with some of the other emerging technologies we’ve mentioned in
this list, AR and VR have enormous potential in training, entertainment, education, marketing,
and even rehabilitation after an injury.

The three main VR categories are the following:


● Non-Immersive Virtual Reality: This category is often overlooked as VR simply
because it’s so common. Non-immersive VR technology features a computer-generated virtual
environment where the user simultaneously remains aware of, and controlled by, their physical
environment. Video games are a prime example of non-immersive VR.
● Semi-Immersive Virtual Reality: This type of VR provides an experience partially
based in a virtual environment. This type of VR makes sense for educational and training
purposes with graphical computing and large projector systems, such as flight simulators for
pilot trainees.

● Fully Immersive Virtual Reality: Right now, there are no completely immersive VR
technologies, but advances are so swift that they may be right around the corner. This type of
VR generates the most realistic simulation experience, from sight to sound to sometimes even
olfactory sensations. Car racing games are an example of immersive virtual reality that gives
the user the sensation of speed and driving skills. Although VR was developed for gaming and
other entertainment purposes, its use in other sectors is increasing.

What’s the Difference Between Virtual Reality and Augmented Reality?


● Virtual reality (VR) is an all-enveloping artificial and fully immersive experience that
obscures the natural world.
● Augmented reality (AR) enhances users’ real-world views with digital overlays that
incorporate artificial objects.

● VR creates synthetic environments through sensory stimuli. Users’ actions impact, at
least partially, what occurs in the computer-generated environment. Digital environments reflect
real places and exist apart from current physical reality.

● In AR, the real world is viewed directly or via a device such as a camera to create a
visual and adds to that vision with computer-generated inputs such as still graphics, audio or
video. AR is different from VR because it adds to the real-world experience rather than creating
a new experience from scratch.

PROS AND CONS OF VIRTUAL REALITY

PROS


Visual effects seem to be better than reality. Virtual reality technology is used in video games
and does a great job in enhancing the user experience. Users get the sensation that they are
in another world. While using certain gaming controllers and consoles, users may experience
a variety of sensory stimulation, like sound, and tactile feedback, such as touch. The use of
sounds and graphics has been increasingly incorporated into virtual reality and generates
excellent gaming experiences for users. In addition, gaming experiences can feel realistic, and
get one’s heart pumping, as is the case when flying an airplane or fighting zombies.

Virtual reality technology is very useful for people with disabilities because disabled
people can feel that they can explore the real world without having to physically travel.
Films produced for virtual reality give the audience the possibility of seeing all the
surroundings in every scene, therefore creating an interactive visual effect for users.
CONS

– High Cost
One of the main cons of virtual reality is that not everyone can afford it. It is very expensive
and people who cannot afford it are left out of this technological world.

– Communication through virtual reality should not replace direct communication.
Another disadvantage of virtual reality is that it cannot be used as a replacement for direct
communication between people. Additionally, this technology is susceptible to deception.

– Feeling of uselessness
Virtual reality users may feel useless as they may get the feeling that they are trying to escape
from the real world.

– Users addicted to the virtual world

Users can become addicted to the virtual world. This addiction can cause various health
related issues. Thus, like anything, it is important to monitor one’s activity.

– The technology is still experimental

Although virtual reality technology is used in various fields, it is still experimental and has
not been developed to its full potential.

– Training in virtual reality environments is not always realistic.

5.1. DIFFERENCE BETWEEN AR, VR AND MR


VIRTUAL REALITY (VR): Implies a complete immersion experience that shuts out the
physical world. Using VR devices such as HTC Vive, Oculus Rift or Google Cardboard, users
can be transported into a number of real-world and imagined environments such as the middle
of a squawking penguin colony or even the back of a dragon.

AUGMENTED REALITY (AR): Adds digital elements to a live view, often by using the camera
on a smartphone. Examples of augmented reality experiences include Snapchat lenses and
the game Pokemon Go.

MIXED REALITY (MR): An experience which combines elements of both AR and VR, where
real-world and digital objects interact. Mixed reality technology is just now starting to take off,
with Microsoft’s HoloLens one of the most notable early mixed reality apparatuses.
6. BLOCKCHAIN

Blockchain technology is a digital database where information is grouped in encrypted blocks,
which are interlinked with other blocks that allow transactions to be carried out. It offers greatly
improved security and lower intermediation costs, as it allows transactions to be carried out
between users located anywhere in the world.
Blockchain is a system of recording information in a way that makes it difficult or impossible to
change, hack, or cheat the system. A blockchain is essentially a digital ledger of transactions
that is duplicated and distributed across the entire network of computer systems on the
blockchain.
Can the blockchain be hacked?
If a security flaw exists on the blockchain network where a smart contract operates, hackers
may be able to steal money from users without being detected because the fraudulent activity
is not reflected. If the security practices surrounding the exchanges are weak, hackers will
have easier access to data.
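
The Python sketch below illustrates, in miniature, why a blockchain is hard to alter: each block stores the hash of the previous block, so tampering with any block breaks every later link. The transactions are invented examples.

# Minimal hash-chain sketch using only the Python standard library.
import hashlib, json

def block_hash(block):
    """Hash a block's contents deterministically with SHA-256."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = []
prev = "0" * 64  # placeholder hash for the very first block
for tx in ["Ana pays Luis 5", "Luis pays Sara 2"]:
    block = {"tx": tx, "prev_hash": prev}
    prev = block_hash(block)
    chain.append(block)

chain[0]["tx"] = "Ana pays Luis 500"  # tamper with an early block...
print(block_hash(chain[0]) == chain[1]["prev_hash"])  # False: the chain no longer verifies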

6.1. ADVANTAGES AND DISADVANTAGES OF BLOCKCHAIN


6.1.1. Advantages of Blockchain
Decentralization. This is the main feature of blockchain technology, and the strong point is
that to authenticate transactions or operations no other instance is required to act as an
intermediary, reducing transaction validation times.

Network distribution. This feature provides several benefits at once: because the network is
distributed, no single party owns it, and different users always hold multiple copies of the
same information.

Low costs for users. The decentralized nature of Blockchain allows for the validation of
person-to-person transactions quickly and securely. Eliminating the need for an intermediary
reduces costs for users.

6.1.2. Disadvantages of Blockchain

High implementation costs. Just as this technology represents low costs for users,
unfortunately, it also implies high implementation costs for companies, which delays its
mass adoption and implementation.

7. INTERNET OF THINGS

IoT: The Internet of Things. When the Internet became commonplace, we were all connected
as an Internet of people. That has been life-changing. But it’s about to change all over again.
Soon it will be our devices (and cars and phones and appliances and more) that are
connected, not us, and this shift is going to turn our world upside down—in a very good way,
according to most experts. Some predict the changes will be so extreme, IoT will lead to the
next Industrial Revolution.

It is “the interconnection via the Internet of computing devices embedded in everyday objects,
enabling them to send and receive data.” At a consumer level, these devices can be placed in
our cars, phones, appliances, medical equipment, wristbands, livestock and more. At an
industrial level, these devices can be in machinery, shipping equipment, vehicles, robots,
warehouses and more. But where the devices are located matters less than what they do. And
what they do is “talk” to each other, sharing data and getting feedback based on that data and
all the other data that is being generated, analyzed and acted on.

The Internet of Things (IoT) describes the network of physical objects—“things”—that are
embedded with sensors, software, and other technologies for the purpose of connecting and
exchanging data with other devices and systems over the internet. These devices range from
ordinary household objects to sophisticated industrial tools. With more than 7 billion connected
IoT devices today, experts are expecting this number to grow to 10 billion by 2020 and 22
billion by 2025.
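
As a small illustration of what a connected "thing" actually sends, the Python sketch below packages a sensor reading as a JSON message. The device name and fields are hypothetical, and a real device would transmit the payload over a protocol such as MQTT or HTTP rather than just printing it.

# Sketch of an IoT-style message: a sensor reading packaged for a backend.
import json, time

def read_temperature():
    return 21.7  # stand-in for a real hardware read

message = {
    "device_id": "thermostat-42",        # hypothetical device name
    "timestamp": int(time.time()),       # when the reading was taken
    "temperature_c": read_temperature(),
}
payload = json.dumps(message)
print("publishing:", payload)  # a real device would send this over the network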

Why is Internet of Things (IoT) so important?


Over the past few years, IoT has become one of the most important technologies of the 21st
century. Now that we can connect everyday objects—kitchen appliances, cars, thermostats,
baby monitors—to the internet via embedded devices, seamless communication is possible
between people, processes, and things.

What technologies have made IoT possible?

While the idea of IoT has been in existence for a long time, a collection of recent advances in
a number of different technologies has made it practical.

● Access to low-cost, low-power sensor technology. Affordable and reliable sensors
are making IoT technology possible for more manufacturers.
● Connectivity. A host of network protocols for the internet has made it easy to connect
sensors to the cloud and to other “things” for efficient data transfer.
● Cloud computing platforms. The increase in the availability of cloud platforms
enables both businesses and consumers to access the infrastructure they need to scale up
without actually having to manage it all.
● Machine learning and analytics. With advances in machine learning and analytics,
along with access to varied and vast amounts of data stored in the cloud, businesses can
gather insights faster and more easily. The emergence of these allied technologies continues
to push the boundaries of IoT and the data produced by IoT also feeds these technologies.
● Conversational artificial intelligence (AI). Advances in neural networks have brought
natural-language processing (NLP) to IoT devices (such as digital personal assistants Alexa,
Cortana, and Siri) and made them appealing, affordable, and viable for home use.

WHAT IS INDUSTRIAL IOT?


Industrial IoT (IIoT) refers to the application of IoT technology in industrial settings, especially
with respect to instrumentation and control of sensors and devices that engage cloud
technologies.

https://www.insiderintelligence.com/insights/internet-of-things-devices-examples/
8. 5G TECHNOLOGY

5G will become even more widespread, and we will start to see operators launching 5G
stand-alone networks, delivering even more incredible speeds and quality of service to
consumers.

5G is next generation wireless network technology that’s expected to change the way people
live and work. It will be faster and able to handle more connected devices than the existing 4G
LTE network, improvements that will enable a wave of new kinds of tech products.
5G is the 5th generation mobile network. It is a new global wireless standard after 1G, 2G, 3G,
and 4G networks. 5G enables a new kind of network that is designed to connect virtually
everyone and everything together including machines, objects, and devices.

5G wireless technology is meant to deliver higher multi-Gbps peak data speeds, ultra-low
latency, more reliability, massive network capacity, increased availability, and a more
uniform user experience to more users. Higher performance and improved efficiency
empower new user experiences and connect new industries.

8.1. Why 5G?


Companies are racing to have the fastest or largest 5G networks. And countries are competing
to be the first to deploy fully functional, nationwide 5G. That’s because the benefits of the new
technology are expected to fuel transformative new technologies, not just for consumers but
also for businesses, infrastructure and defense applications.

8.2. How fast is 5G?

5G can be significantly faster than 4G, delivering up to 20 Gigabits-per-second (Gbps) peak
data rates and 100+ Megabits-per-second (Mbps) average data rates. 5G has more capacity
than 4G. 5G is designed to support a 100x increase in traffic capacity and network efficiency.
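
The Python arithmetic below turns those figures into download times for a 4 GB file, using the 100 Mbps average and 20 Gbps peak rates quoted above and an assumed 20 Mbps average for 4G (the 4G figure is an illustrative assumption, not a quoted specification).

# Download time for a 4 GB file at different data rates.
FILE_GIGABYTES = 4
file_megabits = FILE_GIGABYTES * 8 * 1000  # 1 gigabyte is roughly 8,000 megabits

for network, mbps in [("4G (assumed average)", 20),
                      ("5G (average)", 100),
                      ("5G (peak)", 20000)]:
    seconds = file_megabits / mbps
    print(network + ":", round(seconds, 1), "seconds")

# 4G: 1600.0 s (about 27 min); 5G average: 320.0 s; 5G peak: 1.6 s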

8.3. Benefits of 5G?

Much of the hype around 5G has to do with speed. But there are other perks, too. 5G will have
greater bandwidth, meaning it can handle many more connected devices than previous
networks. That means no more spotty service when you’re in a crowded area. And it will enable
even more connected devices like smart toothbrushes and self-driving cars.
8.4. How does it work?
With 5G, signals run over new radio frequencies, which requires updating radios and other
equipment on cell towers. There are three different methods for building a 5G
network, depending on the type of assets a wireless carrier has: low-band network (wide
coverage area but only about 20% faster than 4G), high-band network (superfast speeds but
signals don’t travel well and struggle to move through hard surfaces) and mid-band network
(balances speed and coverage).

8.5. ADVANTAGES AND DISADVANTAGES OF 5G TECHNOLOGY

8.5.1. Advantages of 5G technology


● Higher Download Speed. The 5G network will have the capacity to increase
download speeds by up to 20 times (from 200 Mbps on 4G to 10 Gbps on 5G) and to decrease
latency (the response time between devices). These speeds will maximize the browsing
experience by facilitating processes that, although possible today, still present difficulties.
● Hyperconnectivity. The 5G network promises the possibility of having a
hyper-interconnected environment to reach the point of having the much desired “smart
cities”. The correct performance of these new dynamics will depend on the bandwidth of 5G
and the Internet of Things (IoT).
● Process optimization. It is also expected to revolutionize areas such as medicine
(remote operations, for example), and traffic management and autonomous vehicles, as well
as its implementation in the construction sector to optimize resources and reduce risks.

8.5.2. Disadvantages of 5G technology


● Immediate Obsolescence. The transition to the 5G network will require devices that
can support it; current 4G devices do not have this capability and will become immediately
obsolete.
● Technological exclusion. The implementation of the 5G network also implies a lack of
immediate affordability for the average consumer, combined with delays in its rollout due
to a lack of means for its use.

● Insufficient Infrastructure. For the 5G network to function properly, an ambitious
investment in infrastructure will be required to increase bandwidth and expand coverage,
and this is not cheap. This situation will necessarily lead to delays in implementation due to
the high costs that governments will have to cover for 5G to function properly.

● Risks in security and proper data handling. All of this requires optimal data
management, and this is where the most conflictive part of the advantages-versus-
disadvantages debate lies. Managing all this information, which belongs to companies,
individuals and even governments, involves more than Big Data techniques; it raises
questions of security and proper data handling.

BIBLIOGRAPHY
https://www.techtarget.com/searchnetworking/definition/5G
https://www.verizon.com/about/our-company/5g/what-5g
https://www.gomultilink.com/blog/multilog/the-pros-cons-and-potentials-of-5g
https://whatsag.com/5g/5g-advantages_disadvantages.php
9. CYBERSECURITY

Cybersecurity is the protection of internet-connected systems such as hardware, software and
data from cyberthreats.

Cybersecurity is a set of processes, tools and frameworks to protect networks, devices,
programs and data from cyberattacks. Cybercriminals launch such attacks to gain unauthorized
access to IT systems, interrupt business operations, modify, manipulate or steal data, engage in
corporate espionage, or extort money from victims.

9.1. Why Do We Need Cybersecurity?

Cybercrime is an increasingly serious problem, and to address it, strong cybersecurity is critical.

Individuals, governments, for-profit companies, not-for-profit organizations, and educational
institutions are all at risk of cyberattacks and data breaches. In the future, the number of attacks
will grow as digital technologies evolve, the number of devices and users increases, global
supply chains become more complex, and data becomes more critical in the digital economy. To
minimize the risk of an attack and to secure systems and data, strong cybersecurity will be vital.
In today’s connected world, everyone benefits from advanced cyberdefense programs. At an
individual level, a cybersecurity attack can result in everything from identity theft, to extortion
attempts, to the loss of important data like family photos. Everyone relies on critical
infrastructure like power plants, hospitals, and financial service companies. Securing these and
other organizations is essential to keeping our society functioning.

Everyone also benefits from the work of cyberthreat researchers, like the team of 250 threat
researchers at Talos, who investigate new and emerging threats and cyber attack strategies.
They reveal new vulnerabilities, educate the public on the importance of cybersecurity, and
strengthen open source tools. Their work makes the Internet safer for everyone.

9.2. Difference Between Cybersecurity and Information Security

IT security is the practice of protecting IT assets, such as endpoints, databases, servers,
networks, and data from unauthorized access in order to prevent misuse or theft. It is an
overarching process that is concerned with how enterprise data is handled on a day-to-day
basis. These attacks may come from inside or outside an organization. Information security
refers to protecting the confidentiality, integrity and availability of data by preventing
unauthorized access, modification, manipulation, or destruction.

Cybersecurity is a “subset” of IT security. It deals with protecting assets from hacks or
cyberattacks, i.e. threats originating from or via the Internet.

Cybersecurity is the state or process of protecting and recovering computer systems, networks,
devices, and programs from any type of cyber attack. Cyber-attacks are an increasingly
sophisticated and evolving danger to your sensitive data, as attackers employ new methods
powered by social engineering and artificial intelligence (AI) to circumvent traditional data
security controls.
9.3. Advantages and Disadvantages of Cyber Security

9.3.1. Advantages:

1) Protects system against viruses, worms, spyware and other unwanted programs.

2) Protects data from theft.

3) Protects the computer from being hacked.

4) Minimizes computer freezing and crashes.

5) Gives privacy to users.

9.3.2. Disadvantages:
1) Firewalls can be difficult to configure correctly.

2) Incorrectly configured firewalls may block users from performing certain actions on the
Internet until the firewall is configured correctly.
3) Makes the system slower than before.

4) Need to keep updating the new software in order to keep security up to date.

5) Could be costly for the average user.

9.4. TYPES OF CYBERSECURITY

Network Security

This type of security refers to the protection of your computer network from attacks inside and
outside of the network. It employs numerous different techniques to prevent malicious software
or other data breaches from occurring. Network security uses many different protocols to block
attacks but allows authorized user access to the secure network.
One of the most important layers to secure your network is a firewall, which acts as a protective
barrier between your network and external, untrusted network connections. A firewall can block
and allow traffic to a network based on security settings.
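
The toy Python sketch below illustrates the firewall idea described above, allowing or blocking traffic based on security settings. The rule table and packets are invented for illustration; real firewalls inspect far more than a destination address and port.

# Toy default-deny firewall: only explicitly allowed traffic passes.
ALLOW_RULES = {("10.0.0.5", 443), ("10.0.0.5", 80)}  # (destination IP, port) pairs

def firewall_allows(dest_ip, dest_port):
    """Return True only when a matching allow rule exists."""
    return (dest_ip, dest_port) in ALLOW_RULES

for packet in [("10.0.0.5", 443), ("10.0.0.5", 22)]:
    verdict = "ALLOW" if firewall_allows(*packet) else "BLOCK"
    print(verdict, packet)  # ALLOW ('10.0.0.5', 443) / BLOCK ('10.0.0.5', 22)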

Since phishing attacks are the most common form of cyberattack, email security is the most
important factor in creating a secure network. Email security might consist of a program
designed to scan incoming and outgoing messages to monitor for potential phishing attacks.

Application Security
This is the process of protecting sensitive information at the app-level. Most of these
security measures should be implemented before the application is deployed. Application
security might involve tactics like requiring a strong password from the user.
Cloud Security
Most of our online life is stored in the cloud. To be honest, I haven’t saved anything to my
personal hard drive in quite some time. Most people use online systems such as Google
Drive, Microsoft OneDrive, and Apple iCloud for storage. It is important for these platforms
to remain secure at all times due to the massive amounts of data stored on them.
Operational Security
This term refers to the risk management process for all internal cybersecurity. This type of
management usually employs a number of risk management officers to ensure there is a
backup plan in place if a user’s data becomes compromised. Operational security
includes ensuring that employees are educated on the best practices for keeping personal
and business information secure.

NANOTECHNOLOGY
Nanotechnology is the understanding and control of materials on the molecular, atomic, or
even subatomic scale. Nanotechnology has allowed scientists and engineers to create
carbon nanotubes, which are stronger than steel and more flexible than rubber.

INVESTIGATION WORK
BIBLIOGRAPHY
https://www.oracle.com/internet-of-things/what-is-iot/
https://www.cisco.com/c/en/us/products/security/what-is-cybersecurity.html
https://www.techtarget.com/searchsecurity/definition/cybersecurity
https://darktrace.com/blog/the-future-of-cyber-security-2022-predictions-from-darktrace
GENERAL BIBLIOGRAPHY

https://www.cisco.com/c/en/us/products/security/what-is-cybersecurity.html#~how-cybersecurity-works
https://afteracademy.com/blog/what-is-kernel-in-operating-system-and-what-are-the-various-types-of-kernel
https://www.mygreatlearning.com/blog/what-is-operating-system/#functions-of-operating-systems
https://www.youtube.com/watch?v=8kujH0nlgv
https://www.techtarget.com/searchnetworking/definition/protocol
https://www.computerhope.com/jargon/n/network.htm
https://www.heavy.ai/technical-glossary/network-topology
https://www.elprocus.com/what-are-network-devices-and-their-types/
https://www.techtarget.com/searchnetworking/definition/network-topology
https://www.geeksforgeeks.org/types-of-network-topology/
https://www.youtube.com/watch?v=znIjk-7ZuqI
https://www.youtube.com/watch?v=614QGgw_FA4
https://www.youtube.com/watch?v=FhrJAi-eHwI
https://www.geeksforgeeks.org/top-10-programming-languages-to-learn-in-2022/
https://www.snhu.edu/about-us/newsroom/stem/what-is-computer-programming
https://www.computerhope.com/jargon/p/programming-language.htm#types
https://hackr.io/blog/best-programming-languages-to-learn
https://www.chakray.com/programming-languages-types-and-features/
https://www.pluralsight.com/blog/software-development/everything-you-need-to-know-about-c-
https://www.techopedia.com/definition/32836/robotics
https://www.futurelearn.com/info/courses/begin-robotics/0/steps/2840
https://www.techtarget.com/searchenterpriseai/definition/AI-Artificial-Intelligence
https://www.techtarget.com/searchnetworking/definition/5G
https://www.verizon.com/about/our-company/5g/what-5g
https://www.gomultilink.com/blog/multilog/the-pros-cons-and-potentials-of-5g
https://whatsag.com/5g/5g-advantages_disadvantages.php
