IBM’s people-centric approach is the only blueprint your AI startup needs

IBM has gone by its initials for so long that many of us have to stop and think about what the letters stand for: International Business Machines.

I was reminded of the company’s particular focus at the TNW 2022 conference last week, when Seth Dobrin, IBM’s first chief AI officer, took the stage to talk about artificial intelligence.

As Dobrin put it, “IBM doesn’t do consumer AI.” You won’t be downloading IBM’s virtual assistant for your smartphone anytime soon, and Big Blue isn’t getting into the selfie-filter app game.


Simply put, IBM is here to provide value to its customers and partners and to create AI models that make people’s lives easier, better, or both.

That’s all pretty easy to say. But how does a company that isn’t focused on creating products and services for the individual consumer actually walk that talk?

According to Dobrin, it’s simple. Care about how individual people are affected by the models you monetize:

We are very strict about the type of data we record and make money from.

During a discussion with Tim Bradshaw of the Financial Times at the conference, Dobrin used the example of large-parameter models such as GPT-3 and DALL-E 2 to describe IBM’s approach.

He described those models as “toys,” and for good reason: they’re fun to play with, but they’re ultimately not very useful. They’re prone to unpredictable output in the form of nonsense, hate speech, and leaked personal information, which makes them dangerous to use outside of a laboratory setting.

However, Dobrin told Bradshaw and the audience that IBM is working on similar systems. He called them “foundation models,” meaning they can serve multiple applications after being developed and trained.

The difference with IBM, however, is that the company takes a people-centered approach to developing its foundation models.

Under Dobrin’s leadership, the company selects data sets from various sources and then applies internal terms and conditions before integrating them into models or systems.

It’s one thing if GPT-3 accidentally spits out something offensive; that sort of thing is expected in the lab. It’s a very different situation when, as a hypothetical example, a bank’s production language model starts serving nonsense or private information to customers.

Fortunately, IBM (a company that partners with companies across a spectrum of industries, including banking, transportation and energy) doesn’t believe in cramming a massive database of unverified data into a model and hoping for the best.

Which brings us to what is arguably the most interesting takeaway from Dobrin’s chat with Bradshaw: “be ready for regulation.”

As the old saying goes, BS in, BS out. If you can’t control the data you train with, your AI startup’s life will be tough when it comes time for regulation.

And the Wild West of AI acquisitions will soon come to an end as more and more regulators look to protect citizens from predatory AI firms and corporate force majeure.

If your AI startup creates models that won’t or can’t comply in time for use in the EU or US once the regulatory hammers fall, your chances of selling them to or being acquired by a company that does business internationally are zero.

No matter how you cut it, IBM is an outlier. The company, and Dobrin, apparently relish the idea of delivering compliance-ready solutions that help protect people’s privacy.

While the rest of big tech spends billions of dollars building environmentally damaging models that serve no purpose other than hitting arbitrary benchmarks, IBM is more concerned with results than with speculation.

And that’s just weird. That’s not how the majority of the industry does business.

IBM and Dobrin are trying to redefine the position of big tech in the AI sector. And it turns out that if your profits aren’t driven by ad revenue, subscriber numbers, or future hype, you can build solutions that are as effective as they are ethical.

And that leaves the vast majority of people in the AI startup world with some questions to answer.

Is your startup ready for the future? Do you train models ethically, taking human outcomes into account, and can you explain the biases ingrained in your systems? Can your models be made compliant with the GDPR, the EU AI Act, and Illinois’ BIPA?

If the current free-for-all dies and VCs stop throwing money at prediction models and other vaporware or prestidigitation-based products, can your models still offer business value?

There’s probably still a little bit of money to be made for companies and startups jumping on board the hype train, but there’s probably a lot more to be made for those whose products can actually withstand an AI winter.

Human-centric AI technologies aren’t just a good idea because they make people’s lives better; they’re also the only machine learning applications worth betting on in the long run.

When the dust settles, and we’re all less impressed by the prestidigitation and parlor tricks big tech spends billions of dollars on, IBM will still be here, using our planet’s limited energy resources to develop solutions with individual human outcomes in mind.

That’s about as close to the definition of “sustainability” as it gets, and it’s why IBM, under Dobrin’s so-far expert leadership, is poised to become the de facto technology leader in the global artificial intelligence community.
