B.TECH
(SEM I) THEORY EXAMINATION 2020-21
ARTIFICIAL INTELLIGENCE
FOR ENGINEERS
Time: 3 Hours Total Marks: 100
Note: 1. Attempt all Sections. If any data is missing, choose it suitably.
SECTION A
1. Attempt all questions in brief.
Q no. Question Marks CO
(a) What is meant by the ethical approach?
Ans-: The virtue approach to ethics assumes that there are certain ideals toward which we should strive, which provide for the full development of our humanity. These ideals are discovered through thoughtful reflection on what kind of people we have the potential to become.
(b) Define AI.
Ans-: Artificial intelligence (AI) is the ability of a computer or a robot controlled by a computer to do tasks that are usually done by humans because they require human intelligence and discernment.
(c) Differentiate between data and information.
Ans-: The key differences between data and information:
Data is a collection of facts, while information puts those facts into context. Data is raw and unorganized, whereas information is organized. Data points are individual and sometimes unrelated, while information maps those points into a meaningful whole.
(d) What is data clustering?
Ans-: Data clustering is the process of grouping a set of data points so that points in the same group (cluster) are more similar to each other than to points in other groups. It is an unsupervised learning technique used for tasks such as customer segmentation and anomaly detection.
(e) List any 2 uses of speech recognition.
Ans-:
- Voice search. This is, arguably, the most common use of speech recognition.
- Voice to text. Speech recognition enables hands-free computing.
(f) What is parsing in NLP?
Ans-: Parsing is the process of analysing the grammatical structure of a sentence, determining how its words relate to one another (for example, by building a syntax tree of subjects, verbs, and objects) so that a machine can interpret its meaning.
(i) List any 2 uses of computer vision technology.
Ans-: (1) Face recognition for security and authentication; (2) Object detection for self-driving cars.
(j) Define pixel.
Ans-: A pixel (picture element) is the smallest addressable unit of a digital image; each pixel stores the intensity or colour values that together make up the image.
SECTION B
(a) Discuss the evolution of artificial intelligence.
Ans-:
Artificial intelligence has grown into a formidable tool in recent years, allowing machines to think and act like humans. It has attracted the attention of IT firms all around the world and is seen as the next major technological revolution following the growth of mobile and cloud platforms; some have even dubbed it the "4th industrial revolution". Researchers have developed software that uses Darwinian evolutionary ideas, such as "survival of the fittest", to construct AI algorithms that improve from generation to generation with no need for human intervention. The software was able to recreate decades of AI research in only a few days, and its creators believe that one day it will be able to find new AI techniques.
AI in The Future
It has been suggested that we are on the verge of the 4th Industrial Revolution, which will be unlike any of the previous three. Where earlier revolutions were driven by steam and water power, then electricity and mass manufacturing, then computerization, this one challenges the very question of what it means to be human.
Smarter technology in our factories and workplaces, along with connected equipment that can communicate, monitor the entire production process, and make autonomous decisions, are just a few of the ways the 4th Industrial Revolution will improve business. One of its most significant benefits is the potential to improve the quality of life and raise the income levels of the world's population. As robots, humans, and smart devices work together on improving supply chains and warehousing, our businesses and organizations are becoming "smarter" and more productive.
(b) Discuss different stages of data processing.
Ans-: Data Processing
Data processing occurs when data is collected and translated into usable information. Usually performed by a data scientist or team of data scientists, data processing must be done correctly so as not to negatively affect the end product, or data output.
Data processing starts with data in its raw form and converts it into a more readable format (graphs, documents, etc.), giving it the form and context necessary to be interpreted by computers and utilized by employees throughout an organization.
Six stages of data processing
1. Data collection
Collecting data is the first step in data processing. Data is pulled from available sources, including data lakes and data warehouses. It is important that the data sources available are trustworthy and well-built so the data collected (and later used as information) is of the highest possible quality.
2. Data preparation
Once the data is collected, it then enters the data preparation stage. Data preparation, often referred to as "pre-processing", is the stage at which raw data is cleaned up and organized for the following stage of data processing. During preparation, raw data is diligently checked for any errors. The purpose of this step is to eliminate bad data (redundant, incomplete, or incorrect data) and begin to create high-quality data for the best business intelligence.
3. Data input
The clean data is then entered into its destination (perhaps a CRM like Salesforce or a data warehouse like Redshift) and translated into a format the destination system can understand. Data input is the first stage at which raw data begins to take the form of usable information.
4. Processing
During this stage, the data inputted to the computer in the previous stage is actually processed for interpretation. Processing is done using machine learning algorithms, though the process itself may vary slightly depending on the source of data being processed (data lakes, social networks, connected devices etc.) and its intended use (examining advertising patterns, medical diagnosis from connected devices, determining customer needs, etc.).
5. Data output/interpretation
The output/interpretation stage is the stage at which data finally becomes usable to non-data scientists. It is translated and readable, often taking the form of graphs, videos, images, or plain text. Members of the company or institution can now begin to self-serve the data for their own data analytics projects.
6. Data storage
The final stage of data processing is storage. After all of the data is processed, it is then stored for future use. While some information may be put to use immediately, much of it will serve a purpose later on. Plus, properly stored data is a necessity for compliance with data protection legislation like GDPR. When data is properly stored, it can be quickly and easily accessed by members of the organization when needed.
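The six stages above can be traced end to end in a minimal sketch. All the record names and values below are made-up illustrations, and the "storage" step uses an in-memory dictionary as a stand-in for a real warehouse:

```python
# 1. Data collection: pull raw records from a source.
raw_records = [
    {"name": "Alice", "age": "34", "city": "Delhi"},
    {"name": "Bob", "age": "", "city": "Mumbai"},      # incomplete record
    {"name": "Alice", "age": "34", "city": "Delhi"},   # redundant record
]

# 2. Data preparation: drop incomplete and duplicate records.
seen, prepared = set(), []
for rec in raw_records:
    key = tuple(rec.values())
    if rec["age"] and key not in seen:
        seen.add(key)
        prepared.append(rec)

# 3. Data input: translate records into the destination's format (typed fields).
entered = [{"name": r["name"], "age": int(r["age"]), "city": r["city"]}
           for r in prepared]

# 4. Processing: compute something interpretable (here, the average age).
average_age = sum(r["age"] for r in entered) / len(entered)

# 5. Output/interpretation: a readable summary for non-specialists.
summary = f"{len(entered)} valid records, average age {average_age:.1f}"

# 6. Storage: persist the result for later use (dictionary as a stand-in).
storage = {"summary": summary, "records": entered}
print(storage["summary"])
```

Notice that the incomplete and duplicate records are eliminated at the preparation stage, exactly as described in step 2 above, before any processing takes place.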
(c) . Explain the speech recognition system in detail.
Ans-:
Speech recognition, also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text, is a capability that enables a program to process human speech into a written format. It is commonly confused with voice recognition, but speech recognition focuses on translating speech from a verbal format to a text one, whereas voice recognition seeks only to identify an individual user's voice.
IBM has had a prominent role in speech recognition since its inception, releasing "Shoebox" in 1962. This machine could recognize 16 different words, advancing the initial work done at Bell Labs in the 1950s. IBM didn't stop there, but continued to innovate over the years, launching the VoiceType Simply Speaking application in 1996. This speech recognition software had a 42,000-word vocabulary, supported English and Spanish, and included a spelling dictionary of 100,000 words. While speech technology had a limited vocabulary in its early days, it is used across a wide range of industries today, such as automotive, technology, and healthcare. Its adoption has only accelerated in recent years due to advances in deep learning and big data: research shows that this market is expected to be worth $24.9 billion by 2025.
Key features of effective speech recognition
Many speech recognition applications and devices are available, but the more advanced solutions use AI and machine learning. They integrate grammar, syntax, structure, and composition of audio and voice signals to understand and process human speech. Ideally, they learn as they go — evolving responses with each interaction.
The best kind of systems also allow organizations to customize and adapt the technology to their specific requirements — everything from language and nuances of speech to brand recognition. For example:
- Language weighting: Improve precision by weighting specific words that are spoken frequently (such as product names or industry jargon), beyond terms already in the base vocabulary.
- Speaker labeling: Output a transcription that cites or tags each speaker’s contributions to a multi-participant conversation.
- Acoustics training: Attend to the acoustical side of the business. Train the system to adapt to an acoustic environment (like the ambient noise in a call center) and speaker styles (like voice pitch, volume and pace).
- Profanity filtering: Use filters to identify certain words or phrases and sanitize speech output.
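Two of the customizations above, language weighting and profanity filtering, can be illustrated with plain-Python stand-ins. Real ASR systems implement these inside the recognition model itself; the blocklist, the domain term "Watson", and the weights here are all assumed examples:

```python
# Profanity filtering: mask listed words in the output transcript.
BLOCKLIST = {"darn", "heck"}  # illustrative placeholder words

def filter_profanity(transcript: str) -> str:
    words = transcript.split()
    return " ".join("****" if w.lower() in BLOCKLIST else w for w in words)

# Language weighting: boost candidate transcriptions that contain
# frequently spoken domain terms (such as a product name).
DOMAIN_TERMS = {"watson": 2.0}  # term -> weight, an assumed example

def score(candidate: str) -> float:
    base = 1.0
    for term, weight in DOMAIN_TERMS.items():
        if term in candidate.lower():
            base *= weight  # weighted terms make a candidate more likely
    return base

# The weighted term steers the choice between two acoustically similar guesses.
candidates = ["call what son support", "call Watson support"]
best = max(candidates, key=score)
print(filter_profanity("well darn that was fast"))
print(best)
```

The point of the sketch is the mechanism, not the numbers: weighting re-ranks competing transcriptions toward domain vocabulary, and filtering sanitizes the text after recognition.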
Meanwhile, speech recognition continues to advance. Companies like IBM are making inroads in several areas, the better to improve interaction between humans and machines.
(d) . Explain The Universal Approximation Theorem.
Ans-:
The Universal Approximation Theorem
Mathematically speaking, a neural network architecture aims at finding a mathematical function y = f(x) that maps attributes (x) to an output (y). The accuracy of this mapping depends on the distribution of the dataset and on the architecture of the network employed, and the function f(x) can be arbitrarily complex. The Universal Approximation Theorem tells us that neural networks have a kind of universality: no matter what f(x) is, there is a network that can approximate it to any desired accuracy. This result holds for any number of inputs and outputs.
Consider a simple network whose input attributes are a person's weight and height and whose job is to predict the person's gender. If we exclude all the activation layers from such a network, we realize that h₁ is a linear function of both weight and height, with parameters w₁, w₂ and the bias term b₁. Therefore, mathematically,
h₁ = w₁*weight + w₂*height + b₁
Similarly,
h₂ = w₃*weight + w₄*height + b₂
Going along these lines, we realize that o₁ is also a linear function of h₁ and h₂, and therefore depends linearly on the input attributes weight and height as well. This essentially boils down to a linear regression model. Does a linear function suffice for universal approximation? The answer is NO. This is where activation layers come into play.
An activation layer is applied right after a linear layer in the neural network to provide non-linearities, which let neural networks perform more complex tasks. An activation layer operates on the activations (h₁, h₂ in this case) and modifies them according to the activation function chosen for that layer. Activation functions are generally non-linear, the identity function being the exception; commonly used ones are ReLU, sigmoid, and softmax. With non-linearities introduced alongside the linear terms, a neural network with appropriate parameters (w₁, w₂, b₁, etc. in this case) can approximately model any given function; the parameters converge to suitable values during training. You can get better acquainted with the mathematics of the Universal Approximation Theorem in the literature.
(e) . Discuss the advantages & challenges of face recognition system
Ans-:
Advantages of Face Recognition:
1. Genetic disorder identification: Healthcare apps such as Face2Gene and software like Deep Gestalt use facial recognition to detect genetic disorders. The face is analysed and matched against an existing database of disorders.
2. Airline industry: Some airlines use facial recognition to identify passengers. A face scan saves time and avoids the hassle of keeping track of a ticket.
3. Hospital security: Facial recognition can be used in hospitals to keep records of patients, which is far quicker than looking patients up by name and address.
4. Detection of emotions and sentiments: It can be used to detect the emotions patients exhibit during their stay in the hospital and to analyse that data to determine how they are feeling.
Problems and Challenges
Face recognition technology faces several challenges:
1. Pose: A face recognition system can tolerate small rotation angles, but detection becomes difficult when the angle is large.
2. Expressions: Mood changes produce different facial expressions, and these expressions can cause the system to misidentify a person.
3. Ageing: A face changes with time and age rather than remaining rigid, so it may be difficult to identify a person who is now, say, 60 years old.
4. Similar faces: Different people may look so alike that they are sometimes impossible to distinguish.
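The matching step that all four challenges stress can be sketched in a few lines: a face recognition system reduces each face to an embedding vector and declares a match when the cosine similarity of two embeddings exceeds a threshold. The vectors and the 0.8 threshold below are made-up illustrations, not outputs of a real face model:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two embedding vectors, in [-1, 1].
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

THRESHOLD = 0.8  # assumed operating point; real systems tune this

enrolled = np.array([0.9, 0.1, 0.4])              # stored reference face
same_person_new_pose = np.array([0.8, 0.2, 0.5])  # slightly shifted embedding
different_person = np.array([0.1, 0.9, -0.3])     # clearly different embedding

print(cosine_similarity(enrolled, same_person_new_pose) > THRESHOLD)
print(cosine_similarity(enrolled, different_person) > THRESHOLD)
```

The challenges above (pose, expression, ageing, look-alikes) all act by pushing a person's embedding toward or past the threshold: a large pose change drags the true match below it, while a very similar-looking stranger can rise above it.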
Presented By Anoop Pal
import turtle

# Part 1 : Initialize the module
t = turtle.Turtle()
t.speed(4)
turtle.bgcolor("white")
t.color("white")
turtle.title('Netflix Logo')

# Part 2 : Drawing the background with rounded corners
t.up()
t.goto(-80, 50)
t.down()
t.fillcolor("brown")
t.begin_fill()
t.forward(200)
t.setheading(270)
s = 360
for i in range(9):
    s = s - 10
    t.setheading(s)
    t.forward(10)
t.forward(180)
s = 270
for i in range(9):
    s = s - 10
    t.setheading(s)
    t.forward(10)
t.forward(200)
s = 180
for i in range(9):
    s = s - 10
    t.setheading(s)
    t.forward(10)
t.forward(180)
s = 90
for i in range(9):
    s = s - 10
    t.setheading(s)
    t.forward(10)
t.forward(30)
t.end_fill()

# Part 3 : Drawing the N shape
t.up()
t.color("blue")
t.setheading(270)
t.forward(240)
t.setheading(0)
t.down()
t.color("red")
t.fillcolor("#E50914")
t.begin_fill()
t.forward(30)
t.setheading(90)
t.forward(180)
t.setheading(180)
t.forward(30)
t.setheading(270)
t.forward(180)
t.end_fill()
t.setheading(0)
t.up()
t.forward(75)
t.down()
t.color("green")  # fixed: the original had an unterminated string here
t.hideturtle()
turtle.done()  # keep the window open until it is closed
#include <stdio.h>

int FibSeries(int);

int main()
{
    int Numi = 0, j;
    printf("\nPlease enter how many terms you want to print: ");
    scanf("%d", &Numi);
    printf("Fibonacci series:\n");
    for (j = 0; j <= Numi; j++)
    {
        printf("%d\t", FibSeries(j));
    }
    return 0;
}

/* Recursive Fibonacci: name unified (the original mixed Fibseries/FibSeries) */
int FibSeries(int Num)
{
    if (Num == 0)
        return 0;
    else if (Num == 1)
        return 1;
    else
        return FibSeries(Num - 1) + FibSeries(Num - 2);
}