December 2, 2019
“A.I. is the new electricity.” – Andrew Ng
Artificial Intelligence (AI) is a concept first presented by John McCarthy et al. at the famous Dartmouth Conference in 1956, but the journey of imagining and producing thinking machines began much earlier. For example, Claude E. Shannon, also a member of the Dartmouth Conference's organizing committee, had already published “Programming a Computer for Playing Chess” (1949), and Turing had already written “Computing Machinery and Intelligence” (1950), the paper that introduced what is now known as the Turing Test. The rapid pace of advances in the field up until 1960 resulted in extremely optimistic declarations: Herbert A. Simon claimed that within twenty years AI would be capable of doing anything a human can do, while Marvin Minsky predicted that the problem of AI could be solved within a single generation.
Today's machines still fall short of Minsky's and Simon's expectations, but applications of AI are routinely used in entertainment, finance, human resources, the automotive industry and healthcare. When we think of AI, we are inclined to picture a robot as capable as a human being, but the term represents much more. Artificial intelligence as a concept does not need to be embedded in a robot or machine: any device that can learn, analyze and act intelligently (that is to say, any device that could pass the Turing Test) falls within the scope of the term. Even if a complete AI is still to come, we have already produced intelligent machines capable of many human-like tasks, such as driving cars, recognizing faces and extracting sentiment.
Machine Learning (ML) is an application of AI that studies algorithms able to analyze a data set, extract patterns and make inferences without being explicitly programmed for each task. With the surge in machine learning's popularity, these algorithms are no longer purely a topic in mathematics; they are also extending their reach into the natural sciences, since processing statistical inferences drawn from data that represents the real world is inherent to ML algorithms.
One of the biggest contributions of machine learning to a programmer's life is that it saves time. For example, if a programmer wants to write a program that checks spelling, then instead of manually introducing every possible typo to the program, she can save time by feeding an ML algorithm a data set that includes common spelling errors. Another area of use is customizing programs for different needs and expectations. Say the spelling program written by our programmer proved widely popular and she wants to adapt it to other languages: instead of starting from scratch for each language, she can reuse the model closest to the target language. ML also excels at detecting patterns in data that lie beyond the threshold of human perception.
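The spell-checking idea above can be sketched with a simple frequency-based corrector: rather than hand-coding every typo, the program "learns" which words exist and how common they are from a corpus, then corrects an unknown word by picking the most frequent known word one edit away. This is a minimal illustration (in the spirit of the classic noisy-channel corrector), and the tiny corpus below is an invented stand-in for real training data.

```python
from collections import Counter

# Hypothetical tiny corpus; a real system would learn from a large text dataset.
corpus = "the quick brown fox jumps over the lazy dog the fox".split()
word_counts = Counter(corpus)

def edits1(word):
    """All strings one edit (delete, swap, replace, insert) away from `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    swaps = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Return the most frequent known word within one edit of `word`."""
    if word in word_counts:
        return word
    candidates = edits1(word) & word_counts.keys()
    return max(candidates, key=word_counts.get) if candidates else word

print(correct("foxx"))  # "fox" — chosen from corpus frequencies, not hand-coded rules
```

Adapting the corrector to another language then reduces to swapping in a corpus (and alphabet) for that language, which is exactly the reuse the paragraph above describes.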
Of course, these are ML's contributions to a programmer's life; what about the technology we use day to day?
Recommendation Systems:
One of the most popular uses of ML is recommendation systems, which allow websites to offer a more tailor-made experience to their users. Sites with an ever-growing plethora of content, like Netflix, YouTube and Amazon, need to present the demanded content at the right time and in the appropriate context; this is one of the reasons they stay ahead in the market.
For example, Netflix displays different visuals for the same production to users with different tastes, and online shopping sites like Amazon can recommend items based on the contents of the user's shopping cart and their shopping history, much like a shopping assistant.
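As a rough sketch of how "customers who bought this also bought" recommendations can work, the toy example below scores items by cosine similarity over shared purchase histories. The user names, items and data are invented for illustration; production systems use far richer signals and much larger matrices.

```python
import math

# Hypothetical user -> purchased-items data; real systems hold millions of rows.
history = {
    "alice": {"laptop", "mouse", "keyboard"},
    "bob":   {"laptop", "mouse", "monitor"},
    "carol": {"keyboard", "monitor"},
}

def cosine(a, b):
    """Cosine similarity between two items, based on which users chose each."""
    users_a = {u for u, items in history.items() if a in items}
    users_b = {u for u, items in history.items() if b in items}
    if not users_a or not users_b:
        return 0.0
    return len(users_a & users_b) / math.sqrt(len(users_a) * len(users_b))

def recommend(cart):
    """Rank items not in the cart by their similarity to items already in it."""
    all_items = set().union(*history.values())
    scores = {i: sum(cosine(i, c) for c in cart) for i in all_items - set(cart)}
    return sorted(scores, key=scores.get, reverse=True)

print(recommend({"laptop"}))  # "mouse" ranks first: every laptop buyer also bought one
```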
Filtering Mail Subjects and Spam:
We've all, at some point, had mailboxes stuffed to the brim with spam and promotions, which makes it incredibly difficult to single out the mails that matter. Mail service providers like Gmail now use ML to scan mail subjects and sort messages into appropriate folders, which makes the user experience run more smoothly. A similar system is used to filter out spam and fraud.
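One classic way to build such a filter is a Naive Bayes classifier trained on labeled mails: each class gets a score from its prior probability plus the likelihood of each word under that class. The four training mails below are invented for illustration; a real filter learns from vast amounts of data.

```python
import math
from collections import Counter

# Hypothetical labeled training mails (text, label); real filters use millions.
train = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting agenda for monday", "ham"),
    ("project report attached", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
totals = Counter()
for text, label in train:
    counts[label].update(text.split())
    totals[label] += 1

def classify(text):
    """Naive Bayes: log prior + summed log word likelihoods per class."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        n = sum(counts[label].values())
        score = math.log(totals[label] / len(train))
        for word in text.split():
            # Laplace smoothing so unseen words don't zero out the probability.
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("claim your free money"))  # "spam"
```

The same mechanism, with folder labels instead of spam/ham, underlies subject-based categorization.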
Smart Personal Assistants:
If you've ever spoken to the personal assistant that comes with your phone or smart speaker, you've experienced two subfields of ML: Natural Language Processing (NLP) and Deep Learning (DL). What distinguishes these subfields is that their algorithms can process unstructured data. Voice-activated assistants like Siri, Alexa, Cortana or Google Assistant need to hear what is said, understand it correctly, and produce an appropriate response in a language the user can comprehend. To do all this and process an unstructured data type like natural language, the algorithms mentioned above are employed.
Search Engines:
Probably visited daily by most of us, search engines like Google, Yandex and DuckDuckGo all use ML algorithms to pair search queries with appropriate results. Specifically, ML is employed in areas such as indexing the web (web crawling), optimizing the order of search results and presenting the right result for the right context.
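One well-known relevance signal behind result ordering is TF-IDF: a page scores higher when it uses the query's terms often, and when those terms are rare across the index. The three-page index below is a hypothetical toy; real engines combine hundreds of signals over billions of pages.

```python
import math

# Hypothetical toy index mapping page name to its text.
docs = {
    "page1": "machine learning algorithms analyze data",
    "page2": "deep learning is a branch of machine learning",
    "page3": "cooking recipes for a quick dinner",
}

def tf_idf(term, doc_words):
    """Term frequency in the page times inverse document frequency in the index."""
    tf = doc_words.count(term) / len(doc_words)
    df = sum(1 for text in docs.values() if term in text.split())
    return 0.0 if df == 0 else tf * math.log(len(docs) / df)

def search(query):
    """Rank pages by summed TF-IDF of the query terms."""
    scores = {}
    for name, text in docs.items():
        words = text.split()
        scores[name] = sum(tf_idf(t, words) for t in query.split())
    return sorted(scores, key=scores.get, reverse=True)

print(search("machine learning"))  # ['page1', 'page2', 'page3']
```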
Artificial intelligence as an umbrella concept, machine learning as a subset of it, and branches of that subset such as deep learning and natural language processing all remain the subject of numerous studies. The ever-growing volume of data we create, advances in computational power and storage, and improvements in algorithms are giving us a deeper understanding of the field. This better comprehension of AI is, in turn, progressively embedding the technology into our lives. We eagerly await the day when cutting-edge technology plays a bigger role in automating our daily tasks, so that we can shift our focus to meaningful work centered on creating value.
Data produced by humans is roughly 80% unstructured and 20% structured. Unstructured data refers to data that does not conform to a pre-defined data model; it can come in many formats, like video, audio, text, mobile activity and social media activity, and is therefore hard to analyze. Structured data refers to data that conforms to a pre-defined model; it is usually text-based and easy to analyze.
Comments & Questions
We would like to hear your thoughts on this report. Please get in touch with us.
StratejiCo. is an independent Turkish corporate and public affairs consultancy firm, providing trusted advice to multinational companies and government institutions in Eurasia since 1987.