Technology giant acquired Indian machine-learning startup Tuplejump Software
iPhone maker Apple Inc. (NASDAQ: AAPL) is seeking to expand its expertise in artificial intelligence (AI) with its acquisition of Hyderabad, India-based machine-learning startup Tuplejump Software Pvt. Ltd, as reported by Bloomberg on September 23rd, 2016. Tuplejump's software specializes in processing and analyzing large data sets quickly. Tuplejump has about a dozen U.S.-based employees, including founder Rohit Rai.
AI has become a key area of focus for technology giants such as Alphabet Inc.'s (NASDAQ: GOOG) Google, Facebook Inc. (NASDAQ: FB), Microsoft Corp. (NASDAQ: MSFT), and Amazon.com Inc. (NASDAQ: AMZN), all of which are competing with Apple to develop virtual assistants that can interact with users through speech. Google's rival product can understand the intent behind a request, while Amazon's Alexa has an edge in understanding different accents, dialects, and languages. Meanwhile, Facebook is seeking to build intelligent chatbots into its network.
Tuplejump is Apple’s third buy in 2016
The Tuplejump deal is Apple's third acquisition in 2016, following its August 2016 purchase of Seattle-based Turi Inc. for about $200 million, a deal aimed at expanding the computing capabilities behind Apple's products and services. Turi helps developers create and manage software and services that use a form of AI called machine learning. It also has systems that let companies build recommendation engines, detect fraud, analyze customer usage patterns, and better target potential users, which Apple could potentially integrate into its future products.
Apple’s acquisition is aimed at gaining an edge in AI, particularly in the field known as pervasive computing, where software tries to automatically infer what people want. Turi’s technology could feed into Apple’s Siri digital assistant and help define new ways computers interact with people.
On January 7th, 2016, Apple purchased Emotient, a company that uses AI to recognize and act upon facial expressions. The San Diego, California-based Emotient announced in May 2015 that it had been granted a patent for a method of collecting and labeling as many as 100,000 facial images a day so computers can better recognize different expressions.
In 2015, Apple acquired a pair of voice-centric AI startups, VocalIQ and Perceptio, to bolster Siri. VocalIQ specialized in using machine learning to allow voice assistants to engage in more realistic conversation. Perceptio focused on helping AI systems run on devices while sharing limited amounts of personal user data.
Growing importance of AI in Apple products
Apple has remained heavily invested in AI over the past couple of years and has begun to integrate these technologies into the iPhone and Siri. Machine learning and AI help computers automatically understand images, videos, and spoken words, and allow systems to take actions or make recommendations based on that data. Apple has already begun to show the fruits of its AI investments through better word recognition by Siri across multiple product lines. Apple will release a new version of its photo-management program for iPhones and iPads that uses AI to recognize objects in photos. Apple will also bring machine-learning capabilities to its iMessage application by adding a feature this fall that translates words in texts into emoji icons.
Tech majors chant AI mantra
Artificial intelligence, which encompasses computing methods such as advanced data analytics, computer vision, natural language processing, and machine learning, is rapidly gaining ground and transforming the way businesses operate. Machine learning and its subset deep learning are key methods for the expanding field of AI. Tech firms are conducting major research in AI aimed at achieving a breakthrough in the rapidly developing space of human-machine interaction.
Google's U.K.-based DeepMind unit, which is working to develop super-intelligent computers, has created the most human-like speech yet achieved by a computer, according to a blog post published on Friday, September 9th, 2016. Named WaveNet, the new AI system is a deep neural network that learns from samples of human speech and then generates raw audio waveforms of its own. WaveNet is much better than existing text-to-speech (TTS) systems, but still falls short of being as convincing as a real human's speech. WaveNet is designed to mimic how parts of the human brain function and can imitate human speech by learning how to form the individual sound waves that a human voice creates.
Even chipmaker Intel Corp. (NASDAQ: INTC) is seeking to become future-ready by embedding its chips and systems with AI functionality. As part of these plans, Intel announced on August 8th, 2016, that it has signed a definitive agreement to acquire startup Nervana Systems, a leader in deep learning and machine learning. The San Diego, California-based Nervana Systems will boost the deep-learning performance of Intel Xeon and Intel Xeon Phi processors, according to Intel's blog post. Nervana Systems will collaborate with Intel's Data Center Group, which needs products with built-in AI services such as voice and picture recognition.
Apple developing smart-home device a la Amazon Echo
On September 23rd, 2016, Bloomberg reported that Apple is developing an Echo-like smart-home device based on the Siri voice assistant. Started more than two years ago, the project has completed R&D and is now in prototype testing. Like Amazon.com's Echo, the device is designed to control appliances, locks, lights, and curtains through voice activation. If the product reaches the market, it would be Apple's most significant product since the Apple Watch, unveiled in 2014.
Meanwhile, Alphabet is also working on a similar device called Google Home. Apple is attempting to differentiate itself from Echo and Google Home with more advanced microphone and speaker technology as well as facial recognition sensors. Apple's 2015 acquisition of facial recognition startup Faceshift may give it the capability to build a device that acts based on a person's presence in a room or a person's emotional state.
Besides controlling other smart-home devices, Apple's speaker would be able to process many of the Siri commands available on the iPhone. Apple has also considered integrating mapping information into the speaker, allowing the device to notify a user of an impending appointment. Apple had earlier sought to integrate the functionality of an Amazon Echo-like device into its Apple TV, but shelved those efforts. Instead, Apple built voice-command features into the remote control for the latest version of its set-top box, released in October 2015.
Apple's stock closed at $112.88 on Monday, September 26th, 2016, gaining 0.15% for the day, having vacillated between an intraday high of $113.39 and a low of $111.55 during the session. The stock's trading volume was 29,795,206 shares for the day. The Company's market cap stood at $608.25 billion as of Monday's close.