BootstrapLabs Workshop: The Hard Things About Deploying and Scaling AI
There’s no question about it: artificial intelligence is booming, and the market is expected to grow twentyfold in the next ten years. What challenges do companies face when building true enterprise-grade AI systems? How will companies adopt this new wave of AI technologies? That’s what the experts and attendees we assembled on November 30th for the Applied AI Workshop discussed. Attendees included executives and C-level leaders from SoftBank, Forever 21, AT&T, World Bank Group, Capital One, Samsung, Booz Allen Hamilton, Syniverse, and more.
A special thanks to Wilson Sonsini Goodrich & Rosati for hosting the workshop and supporting the Artificial Intelligence community.
Meet the BootstrapLabs Applied AI Workshop Speaker lineup:
Thomas Campbell, Founder & President, FutureGrasp, LLC
Thomas advises organizations on trends and implications of emerging technologies. As the first National Intelligence Officer for Technology, he served as the focal point for all activities related to emerging and disruptive civil technologies.
Jane McFarlane, Founder & CEO, Seurat Lab
Jane has over 30 years of experience in high-performance computing, data analytics, and geospatial mapping. She has been responsible for directing industry research groups including HERE (a leader in geospatial mapping), Imara (a lithium-ion battery company), and OnStar at General Motors (the first at-scale telematics solution).
Alex Holub, Co-founder & CEO, Vidora
Alex studied AI throughout his academic career at Cornell University and during his Ph.D. at Caltech. He founded Vidora with Abhik Majumdar and Philip West in 2012 to put AI in the hands of everyone from marketers to data scientists to execs, providing them a simple platform to ask questions and use the answers to automate and optimize their business.
One of the biggest challenges mentioned throughout the night was talent: attracting it, retaining it, and growing the pipeline. The competition is fierce, and startups and big companies alike have a hard time competing with giants like Amazon, Facebook, and Google.
Interestingly, the talent gap is pushing companies to invest heavily in code that can create code, as Thomas Campbell mentioned: “when we get to that point when code creating the code is better than humans creating the code, that might start lifting AI to the next level.”
A big hurdle that Thomas Campbell sees in the near future is cybersecurity, a “tsunami of cybersecurity issues” to be exact. Cyberbots, as opposed to humans, will be “faster, more efficient, never resting, omnipresent,” and they will be used for both offense and defense. As he puts it: the future will be AI vs. AI.
#AI vs #AI – the cyber bots will be faster & more efficient – this is one of the biggest issue today! #Cybersecurity pic.twitter.com/F5oYUonQ0i
— BootstrapLabs (@bootstraplabs) December 1, 2017
Prof. Jane McFarlane of UC Berkeley presented a different set of challenges that AI faces in terms of the data it requires. To power machine learning algorithms, you need tons of data. But, “when you’re in a giant bucket of data, how do you pick out the bad?”
She has worked in the space for several years, and the data used to feed machine learning algorithms always comes with some bad nuggets you have to parse out. Because of the difficulty of weeding out bad data, Jane says autonomous transportation won’t happen in her lifetime: “I’m the Debbie Downer of autonomous transportation.”
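Jane’s question of how to pick the bad out of a giant bucket of data has no single answer, but one common first pass is statistical outlier filtering. Here is a minimal toy sketch using z-scores with NumPy — our own generic illustration, not anything presented at the workshop:

```python
import numpy as np

def filter_outliers(values, z_threshold=3.0):
    """Keep only points within z_threshold standard deviations of the mean."""
    values = np.asarray(values, dtype=float)
    z = np.abs((values - values.mean()) / values.std())
    return values[z < z_threshold]

# A "giant bucket" of mostly clean readings with a couple of bad nuggets.
readings = np.concatenate([np.full(100, 10.0), [500.0, -400.0]])
clean = filter_outliers(readings)  # the two extreme values are dropped
```

Real pipelines layer many such checks (schema validation, deduplication, domain rules), which is part of why data veracity remains such a stubborn problem.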
Another interesting point Jane brought up is how overvalued Big Data is: “there are some nuggets in Big Data that are valuable, but the rest of it is redundant and costs a lot of money to store.”
Jane Macfarlane on stage during the Bootstraplabs Applied AI Workshop #AAI18 #ai #ml #data #maps pic.twitter.com/CzkW69Rw4L
— BootstrapLabs (@bootstraplabs) December 1, 2017
Alex Holub from Vidora presented another set of challenges that companies scaling up AI projects run into: how do you figure out which methods and techniques to use? With so many options, figuring out the best way to clean the data and which feature engineering approach and model to leverage is very challenging for teams with limited resources. That’s exactly what Vidora is solving with their product, Cortex.
Cortex learns from the raw data which data cleaning, feature engineering, and modeling techniques to use. You can think of it as assigning probabilities to which approach is likely to be most successful, based on the raw dataset and on what has worked in past projects with similar inputs.
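To make the general idea concrete, automated pipeline selection can be sketched as scoring candidate cleaning/model combinations by cross-validation and keeping the winner. This is a generic scikit-learn illustration of the concept, not Vidora’s actual Cortex implementation:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a company's raw dataset.
X, y = make_classification(n_samples=200, random_state=0)

# Candidate pipelines: different preprocessing + model combinations.
candidates = {
    "scaled_logreg": make_pipeline(StandardScaler(), LogisticRegression()),
    "raw_tree": make_pipeline(DecisionTreeClassifier(random_state=0)),
}

# Score each candidate on the data and pick the most promising one.
scores = {name: cross_val_score(pipe, X, y, cv=5).mean()
          for name, pipe in candidates.items()}
best = max(scores, key=scores.get)
```

A production system would search a far larger space of pipelines and reuse results from past projects to prioritize candidates, rather than exhaustively scoring every combination.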
The Artificial Intelligence industry is moving faster than ever before, but there are still big challenges, such as talent, cybersecurity, and data veracity, that we need to overcome to truly push the industry forward.