DeepMind is the artificial intelligence (AI) division operating within Google. While DeepMind was originally an independent company, it was acquired by Google in 2014 for a reported price of more than $500 million. The acquisition was intended to position Google more favorably in the growing AI arena and to further its interests in deep learning.
The DeepMind team has made significant progress in AI, most notably with the AlphaGo program’s defeat of Lee Sedol – a feat considered impossible only a few years earlier, given the complexity of Go compared with chess.
They’ve also come a long way with WaveNet, a speech generation program designed to mimic human speech, which is far ahead of other similar programs.
While the technology and its potential uses are incredibly intriguing, they also raise a lot of questions about what these constructs will do once made accessible to the larger world.
AI, Neural Networks, and Image Recognition
Recent growth in the area of AI focuses on the increased sophistication of neural networks and image recognition. One of the biggest hurdles in the use of big data is the variety of data types being collected, such as images, audio, and text, to name a few.
Most computer databases were designed with text-based datasets in mind. This simplified the analytical approach as the information was stored in a highly recognizable form. However, big data began demonstrating the value in other forms of communication, including images.
Social media increased the regularity with which images were used to communicate, making the ability to identify objects within larger images crucial for further developments. This, in turn, has increased the need for deep learning and neural networks.
What can AI do?
The reach of AI has expanded into both the business and consumer worlds. Advanced analytics and machine learning have taken what was often treated as theory and begun to make it a reality. And once notable innovations in an information technology sector begin, they often pick up steam quickly – think of social media, for instance, which was years in the making but seemingly became the next big thing overnight in the middle of the first decade of the new millennium.
Companies are increasingly dependent on AI-driven solutions to meet the expectations of customers. Everything from spam filters to GPS redirection around traffic issues has led consumers to expect technology to anticipate their desires and increase efficiency in their lives.
A prime example of AI in the world today revolves around newsfeeds and search results. Over time, the associated systems compile information about individual user preferences and use that information to anticipate what the user would like to see. Recommendations improve through increased interaction, as the AI program integrates more data into its construct.
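The preference loop described here can be sketched as a toy count-based ranker. This is a minimal sketch: the class name and scoring scheme are invented for illustration, and real newsfeed systems rely on far richer signals and learned models.

```python
from collections import Counter

class NewsfeedRecommender:
    """Toy preference model: count interactions per topic and rank
    candidate stories by how often the user engaged with that topic."""

    def __init__(self):
        self.topic_clicks = Counter()

    def record_click(self, topic):
        # Each interaction feeds more data into the model.
        self.topic_clicks[topic] += 1

    def rank(self, candidate_stories):
        # candidate_stories: list of (title, topic) pairs,
        # sorted by how much the user has engaged with the topic.
        return sorted(candidate_stories,
                      key=lambda story: self.topic_clicks[story[1]],
                      reverse=True)

rec = NewsfeedRecommender()
for topic in ["sports", "sports", "tech"]:
    rec.record_click(topic)

ranked = rec.rank([("Election update", "politics"),
                   ("New phone released", "tech"),
                   ("Match report", "sports")])
print([title for title, _ in ranked])
# The sports story ranks first because "sports" drew the most clicks
```

The point of the sketch is the feedback loop: every recorded click changes the next ranking, which is why these systems improve with use.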
Virtual assistants run on the same premise but are still considered a fledgling technology. Programs like Siri and Cortana use many AI features, but some of the results are still considered lacking in the eyes of users. Additionally, ease of use is often cited as a shortcoming.
The thing about AI, however, is that the more data the algorithms are fed – in this case user interactions – the more accurate they become, as this video neatly illustrates at the 10:34 mark.
So as giants in the field like Apple, Microsoft, and Google compete to create the next generation of virtual assistants, the technologies will become increasingly relevant through better prediction of user behavior.
Additionally, the reach of virtual assistants is only going to expand. These programs offer retailers an opportunity to turn an online or app-based shopping experience into one that is more personal. Instead of solely interacting with a screen, AI-generated prompts and calculations can enhance the shopping experience and improve customer retention. While some of these techniques are already in use, such as location-based recommendations, the integration of advanced analytics and customer tracking will add further depth to the experience.
However, the real meat of the story lies in what we call advanced AI. So let’s talk a little about that, shall we?
Implications of Advanced AI
The implications of reaching super-human AI are enormous and somewhat unfathomable, as we discussed in this article. However, the path to getting there is not necessarily clear, and it is not going to happen overnight, so let’s focus on the near-term potential of the slightly less intimidating-sounding advanced AI.
Advanced AI has significant potential to change people’s lives, but along with the benefits comes a substantial amount of risk. Questions about ethics and privacy abound, as well as concerns about criminal applications of the technology and the effect on certain job market segments.
Voice Recognition and Mimicry
WaveNet has gone beyond standard voice recognition and shifted into producing more natural-sounding text-to-speech systems. The technology can also be trained to mimic a specific human voice.
The intent of the technology is to improve interactions between computers and people. Most of the speech generation programs used to facilitate “conversations” with programs like Siri and Cortana operate by using large databases of specific sounds. The sounds are then combined to create the spoken word-based output. However, since these fragments don’t always string together naturally, the sound has an artificial component that is difficult to ignore.
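The splicing approach described above can be illustrated with a toy sketch. The unit database here is invented (short lists of numbers standing in for recorded sound fragments); real concatenative systems store thousands of such fragments.

```python
# Toy "concatenative synthesis": look up pre-recorded units and
# splice them together. Each unit is a short list of samples
# standing in for a recorded sound fragment.
unit_db = {
    "he":  [0.1, 0.3, 0.2],
    "llo": [0.2, 0.4, 0.1],
}

def synthesize(units):
    samples = []
    for unit in units:
        samples.extend(unit_db[unit])  # hard splice: no smoothing at the joint
    return samples

audio = synthesize(["he", "llo"])
print(audio)
```

The hard splice between fragments is the source of the artificial sound described above: the units don’t always string together naturally. WaveNet sidesteps this by generating the waveform directly rather than gluing fragments together.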
Improved voice capacity can make these conversations sound more like those held with friends, family, coworkers, or customer service representatives. However, it also gives criminals the opportunity to recreate voices for nefarious purposes.
For example, software designed to mimic the voice of someone close to you could be used to phish for information. If you believe you are talking with a specific person, you are more likely to provide details that would otherwise be kept private.
Job Market Shifts
AI has the potential to affect the job market in new ways by severely limiting the number of individuals needed to complete tasks currently dominated by human employees. For example, while various forms of technology have long been able to “speak,” the ability to hold a conversation is relatively new. As AI-based conversational solutions become more sophisticated, they have the potential to replace the droves of customer service representatives working at companies across the globe.
This shift can also make skill gaps between individuals more relevant. As technology replaces more repetitive and entry-level positions, the need for advanced education will grow. For those who can’t afford the costs associated with higher education and training, the number of available employment opportunities will only shrink, while unemployment among this section of the population will grow.
AI and Improved Decision-Making
AI can become hugely advantageous in arenas where decision-making and prediction in complex, dynamic systems are front and center. Examples include doctors, people working in the financial markets, and other skill-based professionals. However, even though AI is on the whole incredibly desirable and advantageous, we can’t let ourselves be blind to the risks. A surprising risk associated with AI is how these advances will change the decision-making process of individuals. The increased use of AI to help drive decisions may lead some to become numb to the risks associated with a recommendation.
For example, if a doctor has a particular treatment in mind for a patient, but an AI-based system suggests a different course, is there a chance the medical professional will defer to the solution offered by the AI under the impression it should know best? Or, if a person decides to go against the automated recommendation, will that choice be supported by others in the organization?
When the use of a technology increases, the risk of dependence grows. And after each successful application, a person becomes more likely to assume that the next outcome will be similar.
As we grow to trust these pieces of software, we may be less likely to scrutinize the next output. And this trend may continue even if we do not fully understand how the AI reached its result. While we may understand that pattern recognition is at the technology’s core, we will never truly know what data was used to come to a particular conclusion. But what if the analysis is flawed?
For example, an AI designed to determine whether certain patients with pneumonia were at a higher risk of death was intended to ensure that those requiring hospitalization would get the care they needed before the condition proved fatal. However, the system misclassified individuals with asthma as being low risk.
The reason the AI drew this conclusion was that most asthma sufferers are automatically sent to intensive care when diagnosed with pneumonia and are therefore statistically less likely to die from the infection. However, the AI treated the presence of asthma as inherently creating a lower-risk situation, rather than attributing the outcome to the different standard of care provided to asthma sufferers.
If the decision about whether an asthma sufferer needed to be hospitalized for pneumonia were made by the AI alone, the person would likely not be admitted. This situation demonstrates the risks associated with an AI that makes faulty connections based on the data provided. So naturally one has to keep in mind that the overall output of the system is never going to be better than the input, which is likely never going to be perfect.
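The confound at the heart of this example can be reproduced with a small simulation. All numbers here are synthetic and invented for illustration: asthma raises the underlying risk, but because asthma patients are routed to intensive care, their observed mortality comes out lower.

```python
import random
random.seed(0)

# Synthetic cohort: the ICU routing policy is hidden from any model
# that sees only (asthma, outcome) pairs.
def simulate_patient(has_asthma):
    gets_icu = has_asthma                      # care policy the data doesn't record
    base_risk = 0.25 if has_asthma else 0.10   # asthma raises the true risk
    risk = base_risk * (0.2 if gets_icu else 1.0)  # ICU care cuts it sharply
    died = random.random() < risk
    return has_asthma, died

cohort = [simulate_patient(i % 2 == 0) for i in range(10_000)]

def mortality(asthma_flag):
    group = [died for has_asthma, died in cohort if has_asthma == asthma_flag]
    return sum(group) / len(group)

print(f"asthma:    {mortality(True):.3f}")
print(f"no asthma: {mortality(False):.3f}")
# Observed mortality is LOWER for asthma patients, so a model trained
# only on these pairs would label asthma "low risk".
```

The observed rates invert the true risk ordering, which is exactly the faulty connection described above: the data encodes the effect of the care policy, not the underlying danger.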
Fighting the Flaws in AI
The performance of an AI system is based on the competency of those designing it as well as their ability to truly understand the nuances of decision-making – again, the system is never better than its maker. Or, put in other words: garbage in, garbage out. However, even as increased processing power allows AI to reach conclusions based on larger sets of data, choosing which data to include (and exclude) creates a potential flaw in the system.
General Knowledge and Common Sense
As the previous asthma and pneumonia example shows, medical professionals understand that asthma increases the likelihood of death when a person contracts pneumonia. And since the correlation is simply understood by the medical community at large, it is easy to forget that an AI might not possess the data necessary to have that knowledge. AI only knows what we tell it. It can only access the data it is given, and can only spot patterns in the information it is fed. Simple oversights based on the failure to include “common sense” knowledge leave these systems with shortcomings that can have dire consequences under certain circumstances.
Another potential flaw exists in the data itself. For example, an AI given access to the capabilities of a large-scale search engine can access unfathomable amounts of information about a given topic. However, just because something is printed online, that doesn’t make it true.
For example, the recent spotlight on fake news and its impact on the public has made some of these issues all the clearer. Social media outlets were called out for hosting false news and its potential impact on the 2016 Presidential Election in the United States. Since there are no distinguishing characteristics separating the falsehoods from the realities, it is easy to confuse actual news reports with skewed, or even blatantly untrue, articles created for less scrupulous purposes.
The more an AI takes information from less reputable sources, the harder it is to trust the accuracy of its results. Couple that with the fact that even people may be unable to tell which pieces of information are true and which aren’t, and you may see major decisions being made based on faulty recommendations that appear to be supported by real “evidence.”
Part of what makes AI intriguing is its ability to pull data from large sources and create a sense of larger meaning. However, is it possible to create completely foolproof systems that can access all of the information they need without crossing into territories that cannot be completely relied upon for accuracy?
AI and Ethics Considerations
A common point of debate regarding AI is whether it should be explored simply because it can be. The totality of the ethical and moral implications of the technology is difficult to comprehend, especially as the use of AI is in a growth period.
To complicate matters further, not everyone knows when they are interacting with these sophisticated systems and when they are not. Additionally, there is no definitive screening or monitoring authority to help determine which goals are worth exploring and which should simply be left on the drawing board.
Data and Privacy
The use of large data sources to predict human behavior raises significant privacy concerns. While most digital resources come with staggering amounts of terms and conditions, the ability to opt out of certain processes isn’t always present. And choosing not to accept the terms as set forth means not being able to use the services at all. Since lacking access to basic services like email is practically unthinkable in the business and consumer worlds, many are contributing data somewhat unwillingly.
An example of this issue occurred when Target revealed a customer’s pregnancy based on advanced analytics. Data from frequent-shopper accounts is collected and cross-compared, and certain purchasing patterns and changes suggest that specific shoppers may be going through major life events. The retailer’s intention is to use the information to create targeted marketing that anticipates a customer’s needs. However, since this form of analysis isn’t generally requested by the customer, it can come off as intrusive.
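This kind of pattern analysis can be sketched as a simple weighted item score. The items, weights, and alert threshold below are invented for illustration and are not the retailer’s actual model.

```python
# Hypothetical life-event scoring: flag shoppers whose baskets contain
# several items that (in this toy setup) correlate with a major life
# event. Weights and threshold are made up for the example.
signal_items = {
    "unscented lotion":  2.0,
    "prenatal vitamins": 3.0,
    "cotton balls":      1.0,
    "large tote bag":    0.5,
}
ALERT_THRESHOLD = 4.0  # invented cutoff for targeted marketing

def life_event_score(basket):
    # Items outside the signal list contribute nothing.
    return sum(signal_items.get(item, 0.0) for item in basket)

basket = ["bread", "unscented lotion", "prenatal vitamins"]
score = life_event_score(basket)
print(score, score >= ALERT_THRESHOLD)
# 5.0 True -> this shopper would be flagged for targeted marketing
```

Even this crude version shows why the practice can feel intrusive: the shopper never asked to be scored, yet a couple of ordinary purchases is enough to trigger a flag.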
Predictions of Negative Behavior
A proposed use of AI involves predicting whether a person is likely to commit a crime or, in the case of those previously convicted, the likelihood to re-offend. While many see this as an issue of public safety, it also presents questions regarding whether AI should be used in such a manner and, if a risk is identified, what should be done about it.
While it is a work of science fiction, the film Minority Report broached the topic of acting on a prediction rather than an actual action. Even though the idea of preventing crime before it occurs is enticing, the question is whether someone should be punished for a crime that has not yet occurred.
The ethical debate regarding predictions and probabilities when patterning human behavior, and whether acting based upon these assessments is either appropriate or even legal, leaves many conflicted about whether the use of this technology should even be considered.
Ethical Concerns and DeepMind
As part of the acquisition of DeepMind by Google, an internal ethics board was created specifically for the project. While specific information about the board’s function was never disclosed, professionals working in the ethics field have provided general commentary on what such boards do.
In most cases, ethics concerns from a business standpoint relate directly to legal considerations. Issues of consumer risk and public safety remain at the forefront, as well as limiting a company’s liability related to the use of specific technologies or products. But, the lack of distinct laws governing AI complicates matters, though basic issues regarding business practices, privacy, and general liability remain in effect.
Another question focuses on whether these sorts of panels should be managed internally or externally. While issues of proprietary information and general secrecy are common in business, whether an internal board can remain objective is the larger question. The lack of separation raises concerns about influence from executives within the company, but internal boards may have more influence regarding the company’s ultimate direction should concerns arise.
Laws, Regulations, and General Governance
As previously mentioned, the lack of controlling authorities and limitations within the AI industry leaves companies a lot of flexibility when it comes to development. Further, it creates an even larger gray zone regarding AI’s use. As the technology continues to grow and the operations associated with it become more complex, only time will tell whether AI will ultimately be a boon or a bane to society at large.