Devin Coldewey



3 Responses to Devin Coldewey

  1. shinichi says:

    ‘AI-powered’ is tech’s meaningless equivalent of ‘all natural’

    Devin Coldewey

    What does artificial intelligence have in common with the price of eggs?

    Say you’re trying to decide between 9 or 10 different varieties of eggs at the store. One catches your eye: “All natural.” Well, that’s nice, natural is good and they’re only 30 cents more — you buy those. Now, those chickens and the eggs they produce may or may not be more natural than the others — because there’s no official or even generally agreed-upon definition of natural. It’s a common ploy to make you pay 30 cents for nothing. That same exact thing is becoming a problem in tech — but with AI.

    There is no official or generally agreed-upon definition of artificial intelligence — if you’re curious about why that is, I wrote a very woolly post called WTF is AI that you might enjoy. But this lack of consensus hasn’t stopped companies great and small from including AI as a revolutionary new feature in their smart TVs, smart plugs, smart headphones and other smart MacGuffins. (Smart, of course, only in the loosest sense: like most computers, they’re fundamentally dumb as rocks.)

    Now, there are two problems here.

    It’s probably not AI

    The first problem is this: Because AI is so poorly defined, it’s really easy to say your device or service has it and back that up with some plausible-sounding mumbo jumbo about feeding a neural network a ton of data on TV shows or water use patterns.

    “The term is complete bullshit,” said the CEO of a major robotics company that shall remain nameless, but certainly employs in its robots what most would agree could be called AI. It’s a marketing term used to create the perception of competence, because most people can’t conceive of an incompetent AI. Evil, perhaps (“I’m sorry, Dave, I’m afraid I can’t do that”), but not incompetent.

    This recent flowering of AI into a buzzword fit to be crammed onto every bulleted list of features has to do at least partly with the conflation of neural networks with artificial intelligence. Without getting too into the weeds, the two aren’t interchangeable, but marketers treat them as if they are.

    The neural networks we hear so much about these days are a novel way of processing large sets of data by teasing out patterns in that data through repeated, structured mathematical analysis. The method is inspired by the way the brain processes data, so in a way the term artificial intelligence is apropos — but in another, more important way, it’s very misleading.
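    Stripped to its essentials, that “repeated, structured mathematical analysis” can be illustrated with a toy example (a minimal sketch for illustration only, not any product’s actual system): a single artificial “neuron” with one weight, nudged a little on every pass until it has teased out the pattern y = 2x from a handful of examples.

```python
# A toy illustration of pattern-finding by repeated small corrections:
# one weight, adjusted by gradient descent on squared error until it
# captures the pattern y = 2*x hidden in the example pairs.

def train(pairs, epochs=200, lr=0.05):
    w = 0.0  # the single learnable weight
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x                # "forward pass": current guess
            grad = 2 * (pred - y) * x   # gradient of the squared error
            w -= lr * grad              # small correction, repeated many times
    return w

w = train([(1, 2), (2, 4), (3, 6)])  # converges to roughly 2.0
```

    Interesting and useful, certainly — but it is statistics, not a mind, which is the gap between what neural networks do and what “AI” connotes.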

    AI is a phrase with its own meaning and connotations, and they don’t really match what neural networks actually do. We may not have defined AI well, but we do have a few ideas. And it’s safe to say that while these pieces of software are interesting, versatile and use human thought processes as inspiration in their creation, they’re not intelligent.

    Yet any piece of software that, at any point in its development, employs a convolutional neural network, deep learning system or what have you, is being billed as “powered by AI” or some variation thereof.

    Now, if even experts can’t say what AI is, what hope is there for consumers? It’s just another item on a list of features and likely as opaque as the rest to the person reading it. But they know AI is high-tech and being worked on by all the big companies, so the product with AI in it must be better. Just like the person choosing “natural” eggs over another brand — one that could just as easily have put that label on its own box, with as little justification.

    And even if it were…

    The second problem is that even if there were some standard for saying what AI is and isn’t, and we were to grant that these systems met it, these aren’t the kinds of problems that AI is good at solving.

    One company, for instance, touted an AI-powered engine for recommending TV shows. Think about that. What insight could emerge from unleashing a deep learning system on such a limited set of data around such a subjective topic? It’s not a difficult problem to determine a recommendation for someone who likes CSI: Miami. They’ll like Person of Interest or something. These aren’t subtle, hidden patterns that only emerge after close scrutiny, or require hours of supercomputer time to figure out.

    And in fact, as Jaron Lanier explained well in The Myth of AI, because the data originates from people — e.g. people who watch this also watch that — the artificial intelligence is completely dependent on human intelligence for all the decisions it makes. People already did the hard part — the development of taste, the selection of what shows they like and don’t like, judging the quality of the episodes, of the acting and direction — and all the computer is doing is searching through human intelligence and returning relevant results.
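    Lanier’s point can be made concrete with a sketch (illustrative only; the show names and viewing histories here are made up). The “recommendation engine” reduces to counting co-occurrences in choices humans already made — every judgment of taste in the output was supplied by the viewers, not the code.

```python
from collections import Counter

def recommend(histories, liked_show, top_n=1):
    """Return the shows most often watched alongside liked_show."""
    counts = Counter()
    for shows in histories:
        if liked_show in shows:
            # Tally everything else these human viewers chose to watch.
            counts.update(s for s in shows if s != liked_show)
    return [show for show, _ in counts.most_common(top_n)]

# Hypothetical viewing histories -- the "human intelligence" being searched.
viewers = [
    {"CSI: Miami", "Person of Interest"},
    {"CSI: Miami", "Person of Interest", "NCIS"},
    {"Person of Interest", "NCIS"},
]

picks = recommend(viewers, "CSI: Miami")  # -> ["Person of Interest"]
```

    Call it AI if you like, but the computer contributes arithmetic; the taste was crowdsourced.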

    Similar claims are made on behalf of IoT devices like thermostats and now shower heads that monitor your use and recommend things or save energy when they know you’re not there. An AI for your home! It tells you when you’re low on milk! It identifies who’s at the door! These are similarly spurious: the data sets are sparse and simple, the outputs binary or highly limited. And just because a device isn’t quite as dumb as the one you’ve been using for 30 years, that doesn’t make it smart. On the contrary, these claims of intelligence are… artificial.
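    To see how limited those outputs are, consider what a presence-based “smart home” feature amounts to underneath (a hypothetical sketch, not any vendor’s code): a binary input mapped to a binary output, plus a threshold check.

```python
def thermostat_mode(occupied: bool) -> str:
    # The entire "intelligence": one binary input, one binary output.
    return "comfort" if occupied else "eco"

def low_on_milk(cartons_left: int) -> bool:
    # "It tells you when you're low on milk!" -- a threshold comparison.
    return cartons_left <= 1
```

    Wrapping rules like these in a neural network doesn’t change what they are; it just obscures them.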

    It’s a fiction cultivated by tech companies that AI meaningfully improves many of these things — in addition to the fiction that it’s AI in the first place. It’s even possible that relying on machine learning is detrimental to their purpose, since the methods by which these models arrive at their conclusions are often obscure.

    This is a bit like another marketing trick often found on egg cartons. Ever seen one that promises the chickens are raised on an all-vegetarian diet? So thoughtful! Problem: chickens aren’t vegetarians; they eat worms and bugs, and have for millions of years. And really, it’s more than likely that taking them off their native diet negatively affects their well-being and the quality of the eggs. (Incidentally, what you want is “pasture-raised.”)

    Maybe you’re thinking, okay Mr. Big AI Expert, if none of this counts as AI, what does? And why is it you aren’t so choosy about the term AI when it comes to writing clickbait headlines?

    Well, this is all just my opinion, but when we’re talking about AI as a concept being researched or developed by big companies and universities, it’s okay to stretch the definition a bit. Because what we’re talking about is really a nascent class of software and there’s no sense being pedantic when the ideas fall under the umbrella most people would understand as AI. But when companies use that fundamental vagueness as a deceptive sales pitch, I feel I have to object. And so I have.

    Misleading, exaggerated or outright fabricated feature lists are a hallowed tradition in tech, so this practice is nothing new. But it’s good to point out when a new weasel word enters the lexicon of trend-hunting marketers. Perhaps there will be a day when AI is actually something you’ll look for in a refrigerator, but that day is not today.
