
Debunking the “No Human”​ Myth in AI

What goes up must come down, and the hype around AI will inevitably deflate sooner or later.

One unfortunate consequence of the hype is that it has created the widely shared perception that AI has, seemingly overnight, reached a stage where it can be fully automated, leading both to endless possibilities and to concerns about its impact on jobs and society.

However, this is not the reality just yet, and both in private conversations and on social media, I'm starting to sense a growing backlash. The general theme is that "so many humans are involved behind the scenes" in various AI products and companies. This is sometimes delivered in Theranos-like tones, as if the horrible underbelly of the beast were about to be exposed.

So let's make it clear: today, scores of humans are involved just about everywhere in AI, whether at tiny startups or massive tech companies. In fact, most AI products are very much NOT fully automated, at least not in an end-to-end, 100% bulletproof way. It is probably OK for the general press to get a bit carried away with AI. We in the tech industry, however, should better understand this reality, and acknowledge it as a necessary step in the process of building a major new wave of technology products.

Humans everywhere

For anyone who cares to look, it is fairly obvious that humans are used across the chain in AI products. Our portfolio company Upwork sees many tech companies, big and small, leveraging its freelancer network around the world for AI-related tasks. Mechanical Turk is widely used (as are, increasingly, companies like CrowdFlower). Last I heard, Facebook's "M" bot was driven by humans, at least during its beta phase. Our portfolio company x.ai employs a team of AI trainers in its quest to collect a large pool of labeled data in order to deliver a fully autonomous scheduling agent.

Humans are everywhere in AI. And I’m not even talking about “human in the loop” businesses, where the final state of the business is a combination of humans and machines working collaboratively. Rather, I’m referring more specifically to businesses whose ultimate goal is complete or near complete automation through AI.

Let’s think through a few examples of the many ways humans are used in AI today:

1) Building the data set. As has been said countless times, having a large data set matters immensely in AI, particularly if you're going to use data-hungry deep learning algorithms. If you're not Google or Facebook, you need to find ways to either acquire or build your own data set. Entrepreneurs being, by definition, particularly resourceful, you see many clever human-based hacks designed to build the data set: from humans driving cars around to acquire driving data, to humans manning bots in a structured way meant to acquire the right conversational data in the right format (see, again, the example of Facebook's M), as a first step towards automation. Plenty of humans, both in the US and overseas, are involved in those processes.

2) Expert labeling. As you build your data set, you need to label it – this is an absolute must to train the machine. This is a complicated task to handle, particularly at scale. Many AI startups correctly identify labeling as a core aspect of their competitive advantage.

Some have built a group of (very human) experts in specific fields where such experts are in reasonably rare supply – think for example of the AI startups in the radiology field (Enlitic, Imagen, etc.) that need access to top radiologists to label the data before the machine can take over. Or startups in other very specialized fields, such as auto collision repair insurance (Tractable).

There is a whole field focused on how to augment those human experts: Interactive Machine Learning (there's a great blog post on the topic here). You see startups building entire infrastructure, with AI, for example, surfacing the right image to the right expert, in order to make the labeling process faster and cheaper.
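To make the idea concrete, here is a minimal sketch of uncertainty sampling, one common interactive machine learning strategy: the model ranks its unlabeled examples by how unsure it is about them and surfaces the most ambiguous one to the human expert first, so scarce expert time goes to the most informative labels. All names here (`pick_for_expert`, the image IDs, the probability values) are hypothetical illustrations, not anything from the article.

```python
def uncertainty(probs):
    """Margin between the top two class probabilities.

    A smaller margin means the model is less sure about this example.
    """
    top = sorted(probs, reverse=True)
    return top[0] - top[1]

def pick_for_expert(unlabeled):
    """Return the ID of the example the model is least sure about.

    unlabeled: list of (item_id, class_probability_list) pairs.
    """
    return min(unlabeled, key=lambda pair: uncertainty(pair[1]))[0]

# Hypothetical pool of unlabeled radiology images with model scores.
pool = [
    ("img_001", [0.98, 0.02]),  # model is confident, low labeling value
    ("img_002", [0.51, 0.49]),  # model is torn: route to the expert
    ("img_003", [0.80, 0.20]),
]
print(pick_for_expert(pool))  # img_002
```

In practice the ranking runs over a large pool and the model is retrained as new expert labels arrive, which is what makes the loop "interactive."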

3) "Last mile" fail-safe. One of the most exciting aspects of AI is that it can be thrown at pretty much any problem. However, as anyone in the field knows, we're nowhere near a situation where AI will solve any of those problems 100% accurately, 100% of the time. For certain types of problems ("low product risk" situations), this doesn't really matter: having Netflix's machine learning algorithms recommend the wrong movie would be mildly annoying (not that it happens much), but that's about it. For other problems ("high product risk" situations), the AI getting it wrong in even a tiny number of cases could have serious real-world impact, and possibly dramatic consequences, such as failing to detect cancer. In many of those situations, it makes sense to have the AI surface the issues that it cannot resolve with enough confidence, and have humans handle those.
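One simple way to implement that fail-safe is a confidence threshold: the system auto-accepts predictions above the threshold and escalates everything else to a human review queue. This is a hedged sketch with hypothetical names (`route_prediction`, `CONFIDENCE_THRESHOLD`), not the design of any specific product mentioned above.

```python
# Threshold tuned per product-risk level: a "high product risk" product
# (e.g. medical diagnosis) would set this much higher than a recommender.
CONFIDENCE_THRESHOLD = 0.9

def route_prediction(label, confidence, human_review_queue):
    """Auto-accept high-confidence predictions; escalate the rest.

    Returns ("auto", label) when the model's answer is used directly,
    or ("human", None) after queuing the case for manual review.
    """
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", label)
    human_review_queue.append((label, confidence))
    return ("human", None)

queue = []
print(route_prediction("benign", 0.97, queue))     # auto-accepted
print(route_prediction("malignant", 0.62, queue))  # escalated to a human
```

The interesting product work is in choosing the threshold: set it low and you lose the safety net, set it high and humans end up reviewing almost everything, which is exactly the "lots of humans behind the scenes" state this post describes.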

4) Services. In the enterprise, behind the glossy marketing, plenty of AI products are closer to tool boxes that require significant customization, in the form of services provided by plenty of… humans.

Should we care?

Other than the fact that it is not nearly as cool to have humans involved, does the fact that most AI is not fully automated really matter? Let’s look at the three constituencies involved:

1) Users: On the whole, my sense is that most people don't care that much how the "sausage is made", as long as the product is great and delivers as promised. Of course, we should care immensely about blatant and deliberate misrepresentations. If a company tells us its AI products are 100% automated when in reality they are entirely operated by humans with no intention of automating anything in the future, or if an enterprise company sends company data out to Mechanical Turk while claiming its on-premise AI did all the work, that is essentially fraud and should be treated as such. But in the vast majority of cases, the reality is a lot more nuanced, and I believe that what matters most to end users is ultimate product performance.

2) Entrepreneurs: Founders with deep technical backgrounds may feel the temptation to keep optimizing the AI to reach full automation prior to launch, but eventually pragmatism must prevail. From my perspective, involving humans when necessary is the smart thing for entrepreneurs to do, as long as it is a means to an end, with the ultimate goal remaining full automation. Worth noting: should they remain at the stage where they use a lot of humans and little automation, entrepreneurs will be stuck with a low-margin business that will be increasingly hard to finance and will have low acquisition potential and/or value – probably not a great long-term strategy.

3) Investors: VCs should go into AI investments with eyes wide open. We're now clearly in the phase of the cycle where many MBA graduates are building an "AI startup" because it is the trend of the moment. Regardless of how much open source machine learning software is available, AI remains really, really hard technically. If a company doesn't have a team with deeply technical DNA in machine learning and data engineering, it is unlikely to ever build a fully automated AI, and will use lots of humans instead. This is fairly easy to diligence.

Conclusion

For all the hype, we’re still very much in the “training wheels” phase of AI. The prospect of fully automated AI products is fascinating, but for the foreseeable future, the process will involve a lot of humans. Except in a few rare cases, I believe it’s all just fine, and part of the journey.

