It’s a brave new world indeed, where Alexa can help your kindergartner with her homework and your smart refrigerator can assess its contents and suggest the dinner menu. Your refrigerator will not only share a recipe for béchamel pasta, it can also inform you that you may run out of cheese in two days and that your chicken will be fresh only until tomorrow. While you hop in for a quick shower, Alexa will talk to your smart oven and pre-heat it to the desired temperature (think of LG’s spectacular smart-home appliances run by its open AI platform, ThinQ).
With AI, your home now comes with an intelligence quotient, where devices that were once markers of comfortable living have morphed into an army of personal assistants. By managing minute, humdrum details of your daily life, these assistants are constantly redefining the meaning of comfort itself.
Even through the lows of AI – the now septuagenarian technology that saw two ‘winters’ of withering investor interest – there has always been deep fascination with the immense possibilities of machines mirroring the capabilities of the human brain to learn, reason and make decisions.
The dystopian fear of AI
With the resurgence of AI over the past decade – against the backdrop of a phenomenal increase in computing power, the growing sophistication of sensor technology and greater inroads in machine learning and deep learning (made possible by the data goldmine churned out by over 30 billion interconnected devices) – it is no wonder that tech companies are now becoming the world’s most valuable.
The top five companies in the world by market capitalization are all tech firms (Alphabet, Amazon, Apple, Facebook and Microsoft), and several others, across verticals, are working hard to make technology a key differentiator rather than a mere enabler.
However, several studies have explored the implications of the way AI is ‘taking over’ human life, particularly on the job front. The perception that AI will eventually outpace humanity has accompanied every technological revolution or advancement. History, however, has allayed this fear: the three waves of the industrial revolution did not create mass unemployment and layoffs but took over tasks that were rudimentary and even dangerous to life and health, freeing workforces to move up the manufacturing value chain.
Industry 4.0 is no different in conferring the same economies of scale in an increasingly digitized world, which will not only create much more prosperity, but bring greater value to human life.
AI in the workplace – a catalyst, not competitor
As an integral vehicle of Industry 4.0, AI is revolutionizing human function at the workplace, trying to become increasingly useful to both the physical and intellectual workforce.
In the manufacturing sector, even a simple adoption of sensor technology has enabled AI-driven automation, resulting in advanced decision making and predictive maintenance of equipment in the heretofore unanticipated window between fault-free operation and failure. These are crucial aspects with direct implications for production costs and brand reputation – imagine the crisis that emerges when an engine fails in an aircraft or when a robotic arm malfunctions during surgery.
In the services sector, AI is now central to achieving the hallmark of ‘consumer centricity’ through increasing personalization of experience even as the customer base proliferates rapidly. In healthcare, where the transformative power of technology is most palpable, AI-assisted diagnostic imaging is emerging as a main driver of preventive and precision care, thereby playing a central role in narrowing the accessibility and affordability gap.
However, in both the physical and intellectual realms, AI does not pose an existential threat to human resources; on the contrary, it has created better outcomes for professionals. AI does not replace the radiologist but vastly aids in delivering faster, more accurate diagnoses. In the industrial setting, AI changes job descriptions and creates new roles that call for an advancement of human potential through upskilling.
Even at lower levels of technology functioning, AI does not signal the end of human intervention. Developing AI algorithms, for instance, involves extensive data mining, where people are needed to collect and tag raw data.
It is humans who are limiting the potential of AI!
While AI is becoming an intrinsic part of our digital identity, whether as individuals or corporations, the cognitive tech revolution has raced further ahead in the consumer field than in the industrial sector. The complexities of enterprise legacy systems, and a relative scarcity of digital infrastructure and relevantly skilled workers, mean that we still have a long way to go before AI is part of an organization’s digital muscle memory.
Another challenge is circumscribing technology with legal frameworks, which are not yet on par with the level of technological advancement. High-profile data breaches at influential tech companies have shown that the world needs to come together to draw the lines and set the regulations, considering that countries have borders but data doesn’t.
The biggest challenge of AI adoption, however, is the availability of raw data. We need policies that clearly define access to data, like the General Data Protection Regulation (GDPR) in the EU.
With the advent of AI, there are infinite opportunities for advancement – but, unfortunately, only a finite number of innovators. There is a tremendous difference between the way enterprises functioned in the 1990s and today: there is greater responsibility on enterprises to become smart workplaces. At present, however, a mindset barrier and a lack of adaptability in work culture are hindering the rate of technology adoption.