Artificial Intelligence, Self-Driving and Unit4’s Evolution
Posted by Derren Nisbet
Recruitment, hiring, and building a workforce was once about finding the most qualified people for the task. These days, with advances in technology and artificial intelligence arriving thick and fast, greater importance is being placed on adaptability and fit with the company's culture.
For organisations to keep up with the modern-day demand to do more with less, roles will have to evolve and adapt alongside the introduction and adoption of AI and self-driving technologies. The top-down structure we're used to seeing in teams is giving way to multi-disciplinary ones, with organisations redesigning their workforces to be team-orientated, flexible, and prepared for machine learning's growing role in business and innovation.
Improved mapping and the evolution of self-drive
In June 2017, Uber hit 5 billion trips. Its mapping efforts began in the USA back in 2015, and last year it turned its attention to British cities including London, Manchester, Birmingham and Leeds. Although it relied on existing mapping applications at first, these fell short of supplying essential data such as traffic patterns and suitable pick-up/drop-off points.
“The street imagery captured by our mapping cars will help us improve core elements of the Uber experience, like ideal pick-up and drop-off points and the best routes for riders and drivers. And here in London it will also help us improve routing for innovations like our car-sharing option uberPOOL.” - via Uber’s Blog, September 2016.
And so, after "doubling down" on its investment in mapping technologies and development, Uber started working towards an offering to rival those of Apple and TomTom.
Toward the end of 2016, Uber launched a small fleet of driverless cars in Pittsburgh, and it is now considered “normal” to see these cars navigating the city. Other companies, such as Ford, General Motors, BMW, Volvo, Google, Apple, Tesla, and Lyft, are also working on their own driverless rideshare car initiatives; and with all of them jostling for their place in the market, there is potential to gather a staggering volume of data around people’s commuting habits.
Self-service technology and the gathering of data
Back in October 2016, Tesla announced that all of its cars would be equipped with self-driving hardware as standard. The new onboard computer is tasked with making sense of data gathered from eight cameras providing 360-degree visibility, twelve ultrasonic sensors, and a forward-facing radar with enhanced processing. The computer runs Tesla's neural net for vision, sonar and radar processing, creating a simultaneous view of the car's surroundings that no human driver could match unaided.
In January of this year, Elon Musk, CEO of Tesla, announced that new and improved Autopilot features were rolling out to vehicles with Tesla's second-generation hardware suite:
… and updates to these features are set to keep rolling out until Tesla's cars reach full autonomy. With 400,000 orders for the entry-level Model 3 electric sedan, and demand for the Model S and Model X still high, there is massive buzz around the potential accessibility and adoption of this intelligent technology.
Google’s DeepMind Technologies and “real” machine learning
Traditionally, artificial intelligence was about an intelligent human pre-programming a system to perform specific tasks. Because it relies on historically gathered data, such a system copes poorly with novel or unpredictable situations, and this inability to adapt is an obvious limitation.
Mustafa Suleyman, co-founder of DeepMind, which Google acquired in 2014, explains that their general-purpose learning algorithms can be combined to make an AI system or "agent":
“These are systems that learn automatically. They’re not pre-programmed, they’re not handcrafted features. We try to provide as large a set of raw information to our algorithms as possible so that the systems themselves can learn the very best representations in order to use those for action or classification or predictions.
The systems we design are inherently general. This means that the very same system should be able to operate across a wide range of tasks.” - via TechWorld, March 2016.
Historically, AI has used repeated trial and error so that its neural network can learn a task, but it cannot learn a separate task without overwriting the first; that is, it can't learn to play poker without "forgetting" how to play chess. In this way, it can never learn as a human does.
Advances in assumptions
DeepMind researchers recently made a breakthrough in developing a programme that can learn one task on top of another, drawing on the skills it learnt from the first task by "remembering" and preserving the connections it judges most important. This is known as "sequential learning". It is still a long way from general-purpose artificial intelligence, but building systems that can learn new tasks on top of old ones is a big step in the right direction toward flexible, efficient learning.
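To make the idea concrete, here is a deliberately tiny sketch of the "remembering" principle, inspired by DeepMind's approach: after learning task A, a penalty anchors the weights judged important for task A while the model learns task B. Everything here (the toy linear model, the quadratic penalty, the `importance` heuristic) is an illustrative assumption, not DeepMind's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w, importance=None, w_old=None, lam=10.0,
          lr=0.01, steps=500):
    """Least-squares regression by gradient descent, optionally
    penalising drift away from previously learnt weights."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        if importance is not None:
            # "remember" task A: pull important weights back toward w_old
            grad += lam * importance * (w - w_old)
        w = w - lr * grad
    return w

# Task A depends only on feature 0; task B only on feature 1.
X = rng.normal(size=(200, 2))
y_a = 3.0 * X[:, 0]
y_b = -2.0 * X[:, 1]

w_a = train(X, y_a, np.zeros(2))
importance = w_a ** 2  # crude proxy: large weights mattered for task A

# Naive sequential training: learning task B overwrites task A.
w_naive = train(X, y_b, w_a.copy())

# Penalised training: task B is learnt while task A is retained.
w_b = train(X, y_b, w_a.copy(), importance=importance, w_old=w_a)

print("naive:", w_naive)      # feature-0 weight collapses toward zero
print("penalised:", w_b)      # feature-0 weight stays near its task-A value
```

The naive run "forgets" task A exactly as described above, while the penalised run keeps the task-A weight largely intact as it picks up task B.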
Investing in AI and self-driving technologies at Unit4
Our products have evolved over the years to become part of the essential make-up of ambitious service-centric organisations. Business World On!, our fully integrated cloud ERP solution, harnesses the latest advances in key technologies such as social, mobile, predictive analytics, cloud and big data. Our recently released digital assistant, Wanda, uses various Microsoft services, such as Office 365, Microsoft Azure LUIS and the Microsoft Azure Bot Framework, to support natural language processing, cognitive services, and machine learning to capture day-to-day data such as travel expenses.
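To illustrate the kind of step an assistant like Wanda performs, here is a hypothetical sketch of intent and entity extraction from an expense utterance. This is not Unit4's or Microsoft LUIS's actual API; the pattern, intent name, and entity fields are invented for illustration, and a real language-understanding service would use trained models rather than a regular expression.

```python
import re

# Hypothetical expense utterance pattern (illustrative only).
EXPENSE_PATTERN = re.compile(
    r"(?:log|add|claim)\s+(?P<amount>\d+(?:\.\d{2})?)\s*"
    r"(?P<currency>pounds|dollars|euros)\s+(?P<category>\w+)\s+expense",
    re.IGNORECASE,
)

def recognise(utterance: str) -> dict:
    """Return an intent plus extracted entities, mimicking the shape of
    a response a language-understanding service might hand back to a bot."""
    m = EXPENSE_PATTERN.search(utterance)
    if m:
        return {"intent": "LogExpense", "entities": m.groupdict()}
    return {"intent": "None", "entities": {}}

print(recognise("Please log 25.00 pounds taxi expense from Monday"))
# The assistant would then post the structured entities to the ERP system.
```

The point is the translation: free-form day-to-day language in, structured data (amount, currency, category) out, ready for the back-office system.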
We continue to invest in future technology such as AI and self-driving to move toward a smarter language tool and better-trained technology with in-memory capabilities. Unit4 works towards building solutions with this intelligence built in because, in business terms, bots bring increased efficiency and enhanced productivity. Organisations stand to save money as time usually spent on manual operations and data classification is redirected to value-adding work, bringing lower operating costs and improved service for customers. As we become more accustomed to using bots, daily tasks are merging with technology more than ever, and we believe working with digital assistants will become simpler still. Think of interactions with the best-known assistants, Amazon's Alexa, Microsoft's Cortana, and Apple's Siri, but more accessible, less complex, and with better communication between person and machine.
With the continued and increasing use of cloud-based systems in 2017 and beyond, business functions such as expenses and timesheets will become more self-driving and more streamlined. Automation makes digital tasks easier to perform. Change happens fast, and our systems will extend and transform just as rapidly.