
Google hastens AI adoption by renting out AI chip time

The IoT will impact every area of our lives: at work, in our homes and even during our leisure time. At the heart of the IoT is artificial intelligence, which takes data from sensors, analyses it and then makes a decision using all of the information available. That decision may be when a driverless car needs to brake to avoid an object, or a prompt to call a doctor if your vital signs deviate from accepted limits. It will recognise and interpret your wishes when you speak to Alexa, and even work out the context of your Google search to provide the most relevant answers.

Google has been at the forefront of AI research for quite some time. In the 2000s, progress in automated speech recognition had stalled until Google took an interest. Backed by a host of datacentres intended to interpret search queries, the Mountain View company put that processing power to good use in its voice search app for the iPhone, which could use location data to answer questions like, “Where is the nearest Starbucks?”

That technology has only improved in the years since. In fact, Google even started to design and manufacture its own processors, as off-the-shelf components couldn’t keep up with the technology’s advance. Google termed these AI processors tensor processing units, or TPUs. The company also designed its own networking processors to move data around more efficiently for AI applications. This internal development had the side benefit of insulating Google from the ebbs and flows of the semiconductor market, ensuring a constant supply of silicon.

Having built the technology, Google showcased its abilities in applications such as the Google Assistant. Now the company sees another potential revenue stream and has decided to rent out AI processing time in its datacentres to outside parties. This matters because an AI system must first be “trained” to operate correctly. Initially it is given a start point, an end point and a set of restrictions; then different scenarios are fed in and it works out the best way to reach the result. In a self-driving car, for example, the processor would be shown a variety of signs in different road conditions until it can recognise specific signs no matter what the weather is like, or whether the sign is partially obscured. From there it can calculate what to do when it recognises the sign.
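The train-on-varied-scenarios idea can be sketched with a toy example (entirely hypothetical data and model, not Google’s actual TPU pipeline): a tiny perceptron learns to tell two sign classes apart even though every training pass sees a randomly perturbed version of each sign, standing in for changing weather or partial occlusion.

```python
import random

random.seed(0)

# Hypothetical "sign" prototypes: a feature vector per sign class.
PROTOTYPES = {0: [1.0, 0.0, 1.0], 1: [0.0, 1.0, 0.0]}

def augment(features, noise=0.3):
    """Simulate weather or occlusion by randomly perturbing each feature."""
    return [f + random.uniform(-noise, noise) for f in features]

def train(epochs=200, lr=0.1):
    """Train a tiny perceptron; every pass sees a freshly perturbed scenario."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for label, proto in PROTOTYPES.items():
            x = augment(proto)  # a new "road condition" each time
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if score > 0 else 0
            err = label - pred  # standard perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def classify(w, b, features):
    """Predict the sign class for a feature vector."""
    return 1 if sum(wi * xi for wi, xi in zip(w, features)) + b > 0 else 0

w, b = train()
```

After training on the noisy variants, the model classifies the clean prototypes correctly, which is the essence of training across many scenarios so that recognition holds up under conditions not seen verbatim before.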

This process is time-consuming and requires major computational power. By giving developers access to its datacentres and their custom hardware and software for the training period, Google can cut the time taken to train a processor from weeks to days or even hours. This applies not just to object detection and recognition of the kind found in driverless-vehicle vision systems; it also applies to managing factories or analysing clinical samples. Google’s decision to open its technology to outsiders could democratise the development of AI solutions to some extent, as developers no longer need access to their own datacentres, and the shorter training time on dedicated equipment should also make the process cheaper overall. The decision has the potential to speed the development of AI solutions considerably and get products to market faster. And if the IoT lives up to its potential of changing everyone’s lives for the better, then Google’s decision could bring those benefits to us sooner.
