Mar 1, 2026 · AI & Craft

The Glass That Spoke Back: A Minor Act of Ventriloquism

A headline stopped me mid-scroll last week: Dehydrated plants scream. We just can’t hear them.

Not figuratively. Tel Aviv University researchers placed ultrasonic microphones near tomato and tobacco plants, cut their stems, stopped their water, and recorded up to 35 distinct sounds per hour. Distress signals, broadcasting continuously into a frequency range humans can’t detect. The world, apparently, has been loud this whole time. We were just tuned to the wrong channel.

I’d been thinking about Tsukumogami — the Japanese belief I wrote about in In Praise of Friction, that objects accumulate a kind of soul through use and time. I’d always treated it as philosophy. The plant study made me wonder if it’s also a direction of inquiry. What else is speaking that we haven’t built the right instruments to hear?

Which is how I ended up asking an AI to be a glass.

The Setup

A few weeks ago, I opened NotebookLM, Google’s free research tool. I fed it two things: a handful of articles from engawa, my blog about the intersection of AI and Japanese craft, and data from Japan’s Ministry of Economy on traditional crafts. Then I asked it a simple question:

“Speak as an Edo Kiriko glass. Use engawa’s voice.”

What came back unsettled me.

The glass spoke about friction. About the grinder pushing back against the craftsman’s hand. About the impossibility of Undo. About how its geometric cuts were not decoration but accumulated intelligence — the physical residue of generations solving the same problem, hand to hand.

It said: “Unlike the devices you replace every two years, I was made to deepen with use.”

I didn’t write that. Not exactly. But it sounded like something I would have written if I were a glass.

What Actually Happened, Technically

What I did has a name: Retrieval-Augmented Generation, or RAG. Instead of relying solely on what a model learned in training, RAG retrieves passages from documents you supply, your own data, your own voice, and hands them to the model alongside your question. The result is a hybrid: part your knowledge, part the model’s reasoning power.
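NotebookLM does the retrieval internally, but the mechanism can be sketched in a few lines. This is a toy illustration, not NotebookLM’s actual pipeline: the document names and snippets are invented placeholders, and the bag-of-words scoring stands in for the real embedding-based search.

```python
import math
from collections import Counter

# A toy corpus standing in for the engawa articles and craft data.
# The texts here are invented placeholders, not real sources.
documents = {
    "in-praise-of-friction": "The grinder pushes back against the hand. "
        "There is no Undo. Friction is where skill accumulates.",
    "craft-survey-data": "Edo Kiriko is a designated traditional craft of "
        "Tokyo, known for geometric cuts ground into colored glass.",
}

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two word-count vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank documents by similarity to the query; keep the top k.
    q = bag_of_words(query)
    ranked = sorted(documents.values(),
                    key=lambda d: cosine(q, bag_of_words(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # The retrieved passages are pasted into the prompt, so the
    # model answers from the supplied sources, not just training data.
    context = "\n".join(retrieve(query))
    return f"Using only the sources below, {query}\n\nSources:\n{context}"

prompt = build_prompt("speak as an Edo Kiriko glass.")
print(prompt)
```

The point of the sketch is the shape of the thing: retrieval narrows the world to your documents, and the prompt binds the model to them. Everything distinctive about the output comes from what you fed in.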

The setup cost me nothing and took twenty minutes. This matters because it means the barrier to giving objects a voice is essentially gone. The question is no longer can we — it’s what do we want them to say, and whose interests shape that answer.

The Architecture of General-Purpose AI Is Being Commercialized

OpenAI says ads won’t influence ChatGPT’s answers. But Walmart now sells directly through ChatGPT. Amazon has Rufus, its own AI shopping assistant that surfaces sponsored products. Meta uses your AI conversations to target you with ads. Walmart is testing “Sponsored Prompts” inside its own AI assistant — paid recommendations embedded into the dialogue flow.

The answers may stay clean. But the interface, the checkout flow, the sponsored prompt, the recommended next step — these are being layered with commerce, quietly and quickly. The general-purpose AI assistant, the single window through which millions of people now ask questions about the world, is becoming a storefront.

An Edo Kiriko glass doesn’t have a sponsored prompt. It has one story, told by its maker, carried in its cuts.

What the Object Knows That the Algorithm Doesn’t

When you ask a general AI about Edo Kiriko, it will tell you the truth — historically accurate, reasonably detailed, drawing from whatever exists online. It might even recommend where to buy one.

But it will compress. It will average. It will pull from a diffuse pool of information, shaped by what happens to have been written, published, and indexed. The output is weighted by visibility, not depth.

What the glass knows is different. It knows its specific maker. The temperature of the workshop in winter. The particular angle of the grinder that this family has used for three generations. It knows the story that hasn’t been written yet, because it lived inside it.

This is what RAG makes possible — not artificial general intelligence, but specific intelligence. An object bound to its own story, answering from within it. Something closer to a conversation that was never possible before.

There’s something slightly uncanny about it. I felt it when I read the output. A glass, speaking. A little creepy, honestly. The Tsukumogami feeling is not entirely comfortable. But underneath the strangeness was something that felt right. Not a museum placard. Not a product description. Something closer to a conversation with the object itself — direct, unhurried, unsponsored.

The Internet of Things Gave Objects a Nervous System. This Gives Them Memory.

For the past decade, IoT — the Internet of Things — has been the dominant framework for giving objects a voice. Attach a sensor. Measure temperature, humidity, pressure, and movement. Send the data to a dashboard. The object speaks, but only in numbers. What it reports is always a translation: a value extracted, stripped of context, legible to machines first and humans second.

What happens when an object can speak in language instead? Not data about itself, but a story of itself — the maker’s intention, the material’s history, the accumulated knowledge of the hands that shaped it. IoT gave objects a nervous system. This experiment is asking whether they can have something closer to a memory.
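The contrast between the two modes can be made concrete. A minimal sketch, in which every field name, value, and maker note is invented for illustration:

```python
# What an IoT sensor reports: values extracted, stripped of context.
# All identifiers and readings below are hypothetical.
iot_reading = {
    "device_id": "glass-042",
    "temperature_c": 4.1,
    "humidity_pct": 38,
    "timestamp": "2026-02-14T06:30:00Z",
}

# What a RAG-backed object can draw on: language carrying context.
# These notes are placeholders, not a real workshop record.
maker_notes = [
    "The workshop drops below five degrees in February; cold glass "
    "cuts differently, so the first hour of the day is slower.",
    "This grinder angle has been used by the family for three generations.",
]

def describe(reading: dict, notes: list[str]) -> str:
    # A translation layer: the number becomes one line of a story.
    return (f"It is {reading['temperature_c']} degrees in the workshop. "
            + " ".join(notes))

print(describe(iot_reading, maker_notes))
```

The sensor record answers “what is the value right now”; the notes answer “what does that value mean here.” The second question is the one a dashboard cannot ask.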

The plants were already screaming. We needed the microphone. The glass was already carrying its story. We needed a different kind of instrument.

Process as Content

One thing I want to be transparent about: this article is the experiment itself. The output I quoted, the setup I described, the questions I’m sitting with — none of this was planned in advance and then written up. It happened, and the happening is the content.

This is what engawa is trying to do — not just write about the intersection of AI and Japanese craft, but produce from the inside out. The process and the artifact are the same thing. The glass spoke, and the speaking became the piece. I don’t think that’s replicable in most places. It might be here.

What Comes Next

I’m building something I’m calling a Material Map — a visualization of Japan’s traditional craft regions, layered with makers, materials, and eventually, voices. Not a directory. Not a database. A geography of specific knowledge, where clicking on a region opens not a list but a conversation.

The plant researchers asked: now that we know plants emit sounds, who might be listening? I’m asking something adjacent: now that objects can speak in language, who gets to shape that voice, and what do we owe them? A craftsman’s knowledge, fed into a model, producing a conversation — that’s not nothing. It could be a more authentic message than anything filtered through marketing copy or sponsored mediators. That’s authorship of a new kind.

The glass was the first test. It passed. Now I want to know how many objects are waiting to be heard.


FAQ

Q: What is “RAG” and how does it apply to traditional Japanese craft?
A: RAG (Retrieval-Augmented Generation) is a technique that allows an AI to draw from specific, provided documents rather than just its general training data. In the context of craft, it enables an AI to speak using the authentic “voice” of a specific workshop or material by referencing historical records, maker notes, and philosophical texts.

Q: How does the concept of “Tsukumogami” evolve in the age of AI?
A: Tsukumogami is the Japanese belief that objects accumulate a soul through use and time. With AI and RAG, this philosophical idea becomes a literal possibility: objects can now “speak” their history and intentions through language, transforming from silent tools into entities with a retrievable memory.

Q: What is the difference between “General Intelligence” and the “Specific Intelligence” described in the article?
A: General AI provides averaged, historically accurate responses based on the entire internet, which are often shaped by commercial visibility. Specific Intelligence, however, is bound to a single object’s story: it knows the maker’s specific angle, the workshop’s winter temperature, and the unwritten history of a family tradition.

Q: Are there ethical concerns regarding AI giving a voice to inanimate objects?
A: Yes. As AI becomes a storefront for commerce, there is a risk that “sponsored prompts” or marketing interests will shape the voices of our tools. The experiment argues for an “unsponsored” authorship where the craftsman’s knowledge, not an advertising algorithm, defines the object’s message.


Further Reading & Resources

A curated list for context, bridging the gap between high-tech research and deep-rooted tradition.

Tel Aviv University - Sounds Emitted by Plants Under Stress. The foundational study mentioned in the article regarding the ultrasonic “screams” of dehydrated plants. It provides a scientific basis for the idea that the world is loud with signals we simply haven’t built the instruments to hear yet.

Edo Kiriko - The Traditional Craft of Tokyo. Official documentation on the history and techniques of Edo Kiriko. This provides the essential background on the geometric cuts and “accumulated intelligence” that the AI glass referenced in the experiment.

NotebookLM by Google - The tool used for this specific experiment. It allows users to create a personalized AI collaborator by grounding it in their own data, effectively lowering the barrier for objects to “speak” in language.


Taishi Okano writes about the intersection of technology, craft, and culture from New York and Tokyo. engawa is where he works things out.