Ask not what AI can do for Pullquote…

ChatGPT-4o’s response to the prompt: “Create a pen and ink drawing that shows a giant wall of library card drawers with a tall rolling ladder in front”

Pressflex’s Pullquote extension for the Chrome browser, which makes it simple to store, tag, deep-link, and share quotes and facts, could help people tap into AI’s genius for processing ideas.

Background: Writing recently about AI’s osmosis with human thought and knowledge revived my interest in the idea that, in parallel to the massive “internal” work of upgrading AI’s algos, datasets and hardware, AI will also need lots of “external” help interacting with day-to-day subjective human needs, whether individual or collective. In other words, as VCs and behemoths like OpenAI and Nvidia invest in strengthening AI’s brain-muscles-skeleton-digestion, AI also needs skin that’s both responsive to human touch AND invites human touch. Pullquote might help with this task.

Pullquote: Pullquote, as a day-to-day catalog of noteworthy snippets of an individual’s knowledge, facts, and interests, might serve as a human-centric skin for AI. Pullquote seems particularly well-suited to serve as an interface for AI: it’s lightweight, sitting in the Chrome browser and operated with a couple of clicks; it adds metadata to quotes, since users summarize and tag ideas/facts; and it creates a unified storage space for an individual’s interests. (The Pressflex team created Pullquote in 2011 to empower people to deep-link to a specific quote buried in the long text of a URL. We gradually added features for classifying, storing, and sharing quotes to this original function. In 2019, Google added “scroll-to-text” to Chrome, which covers Pullquote’s original use case. One limitation: Pullquote is desktop-only, because Chrome extensions don’t operate on mobile.)
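For reference, Chrome’s scroll-to-text feature works through a special URL fragment. A link like the hypothetical one below (the address is invented for illustration) scrolls straight to, and highlights, the quoted text on the target page:

```
https://example.com/long-article#:~:text=the%20exact%20quote%20to%20highlight
```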

Talking with ChatGPT: To play around with how AI might enhance Pullquote and vice versa, I conversed with ChatGPT about Pullquote, then asked GPT to analyze some of my own Pullquote collections. Here’s AI’s take on my #Trump pullquotes, ChatGPT’s take on my #carbs feed, and AI’s analysis of my recent Pullquotes feed.

AI-suggestions for AI-in-PQ: No earth-shattering insights yet, but I did get a good list of possible AI-angles for Pullquote, some simple, some more complex. (Asking AI specifically for advice on driving network effects in Pullquote provided no extra insights.) Here are the highlights I noted or extrapolated from GPT’s suggestions:

  1. AI suggests tags when I grab a quote (see the sketch after this list)
  2. AI suggests people/tags to follow
  3. AI identifies a surprise connection of the day that sits at the intersection of ideas that interest you
  4. AI builds a newsletter based on tags you’ve used
  5. AI charts trends over time in your tagging
  6. AI suggests a summary for each quote you grab (currently our headline) 
  7. AI suggests a counterpoint for any quote you grab
  8. AI summarizes quotes you’ve given a specific tag to (our headline function)
  9. AI, like a tutor, suggests context for quotes or links to related information
  10. For each new quote, AI links to related quotes… or links to an essay integrating all related quotes for a given tag
  11. AI reveals links between similar quotes in my catalog… or other people’s?
  12. AI drafts an article (or states a thesis) based on quotes with a specific tag
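To make idea 1 concrete, here’s a minimal sketch of how tag suggestion could work. To be clear, this is hypothetical, not Pullquote’s actual code: it assumes an OpenAI-style chat-completions endpoint and an API key in OPENAI_API_KEY, and simply asks the model for hashtag-style tags for a grabbed quote.

```typescript
// Hypothetical sketch: ask an OpenAI-style chat API to suggest tags for a quote.
// Assumes an OPENAI_API_KEY environment variable; not Pullquote's real code.
async function suggestTags(quote: string): Promise<string[]> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content:
            "Suggest 3 to 5 short hashtag-style tags for the quote. " +
            "Reply with a comma-separated list only.",
        },
        { role: "user", content: quote },
      ],
    }),
  });
  const data = await response.json();
  // Turn the comma-separated reply into clean, lowercase tags.
  return data.choices[0].message.content
    .split(",")
    .map((tag: string) => tag.trim().replace(/^#/, "").toLowerCase())
    .filter((tag: string) => tag.length > 0);
}

// Usage: suggestTags("Tariffs are taxes paid by consumers.").then(console.log);
```

Several of the other ideas (counterpoints, summaries, context) are the same pattern with a different system prompt.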

Steps I took:

  1. To prime the task, I asked AI to summarize Pullquote.com
  2. I asked GPT to suggest ways to integrate AI into PQ. 
  3. I asked AI to summarize my quotes at https://pullquote.com/hc/tags/trump/
  4. Since that summary was uninteresting, I asked AI to identify what areas were missing from my quotes. 
  5. I then asked AI to find a quote about one of the missing areas it had identified (Trump’s foreign policy) … it searched five sites and came up with something.
  6. I asked AI to illustrate the ideas in the Trump quotes. (It refused!) 
  7. I asked AI for suggestions about using AI to drive network effects for PQ. 

Takeaways: 

  • The most startling thing I realized is that ChatGPT can already interact with Pullquote in multiple ways, right now, without any work on our part.
  • The “what’s missing from these quotes” function was interesting. (In fact this is my favorite use of AI when I’m writing an essay — “what am I missing?” or “what are other examples of this phenomenon I’m describing?”)
  • Newsletter engines like Substack have gained ascendancy since we created Pullquote… a good trend for PQ to ride?

Network effects: I’d note that only one of the ideas above — AI recommending other Pullquoters — creates even weak network effects, motivating a user to recruit friends to Pullquote so that the original user benefits. (Facebook and eBay have strong network effects; adding more users creates more opportunities for current users to communicate or trade.) Network effects are crucial to growing web tools without spending money on marketing. They’re also important if I’m right that AI’s growth will be strongest when it facilitates socializing or other p2p communication/collaboration, just as social media fueled the Internet’s explosive growth. I think the biggest winner of the “skin for AI” game will be an AI service that enhances p2p collaboration, with AI as the medium for socialized creation or processing of information.

Ideas to explore: 

  • PQ for work teams.   

Hallucinate much? AI is too human, and maybe that’s good

ChatGPT-4o’s reply to my request: “create a pen and ink drawing of a computer with a human face.”

When a new technology comes along, we usually describe the tool and its role in our lives by invoking last-generation tools (horseless carriages) or inventing neologisms (noob). But, as far as I can tell, we’re not seeing similar behavior when it comes to AI. Even though some form of AI has been around for nearly 60 years, we’re using only terminology identical to what we use to describe human traits and skills. AI “thinks,” “suggests,” and even “hallucinates.” I’ll argue that the absence of new lingo around AI, the lack of a linguistic boundary between how we talk about AI and how we talk about human thinking and knowledge, is a hint of what lies ahead — soon we won’t distinguish human knowledge from AI; soon AI won’t be a tool, but an appendage we take for granted.

Naming new technology. One of the fascinating characteristics of radically new technologies is that we, whether laymen or experts, often describe the new technology in terms of existing tools.

  1. The “horseless carriage” became the “automobile,” which was shortened to “auto” and finally “car.” (And today, “electric vehicles” are generally called EVs.)
  2. Movies were first seen as “moving pictures.”
  3. The telephone was originally the “speaking telegraph.”
  4. Likewise, the radio was first the “wireless telegraph.”
  5. Television was, for a while, “radio with pictures.”
  6. The Advanced Research Projects Agency Network became ARPANET, which became the Internet; then an interpretive layer called the World Wide Web was added, which finally became, very simply, the web.
  7. The “web log” (even in 2005, the New York Times still used “Internet web log,” stubbornly belaboring the obvious) was eventually distilled into “weblog,” which settled finally into “blog.”
  8. Eventually, video blogs made the obvious sideways jump into “vlog.”

Naming new behavior. Once we’ve settled on a name for the new technology and the technology establishes its unique new role in our lives, social ripple effects emerge that are even more linguistically varied and interesting. We can chart the new technology’s expanding roles in the terminology and constructs that we subsequently borrow or invent to describe the new technology’s unique input, output, content and behavior.

  1. TV planted the seed for “couch potatoes,” “sitcoms” and “miniseries.” (Which I’ll admit I pronounced “minizeries” for a while.)
  2. At the simplest level, “blog” morphed from being just a noun (an entire personal site) to also being a verb (as in “I blog”) and an individual blog post (as in “check out my newest blog”). Blogs added “blogrolls” (links to blogs the author recommended, links which themselves became a kind of informal consortium for sharing ideas and traffic) and “comment sections,” which eventually begat “trolls,” commenters whose main goal was triggering ire or misery.
  3. Facebook created the new sociology (and addiction) of “liking” other people’s content.
  4. Twitter begat “hashtags” and supercharged “memes,” which launched both political and fashion revolutions.
  5. Netflix birthed “binge watching,” which may have crippled some bars.
  6. Twitch popularized “live streaming.”
  7. Video gaming spawned “rage quitting,” “noob,” “loot boxes,” the salutation “GG” (good game!) and many new terms and behaviors.
  8. Social media platforms often inspire their own cultures, the passport into which is arcane, insider-only terminology. The giant Reddit community is a great example of this, with terms including “Neckbeard,” “FTFY” and “Karmawhore.” You can see a dictionary of dozens of terms frequently used on Reddit here.

AI is different. Though ChatGPT, the most powerful AI engine to date, is hugely more robust than its precursors, various forms of AI have been around for more than 60 years. Along the way, AI has had various names — tool augmentations and neologisms like machine learning, cybernetics, natural language processing, and deep learning. What’s interesting is that the latest and most powerful iteration of AI, ChatGPT, which is apparently being used by nearly 200 million people and can generate believable-enough essays and images based on giant databases of word associations, has not spawned neologisms like “neckbeard” or attracted ill-fitting metaphors patched together from prior tools, like “horseless carriage.” Instead, we’re content to talk about ChatGPT like it’s human: AI “thinks,” “argues,” “draws” and “creates.” ChatGPT acts so human, we often can’t help muttering “thank you” when it answers a question.

Will this change? There’s a strong argument that we will not create new lingo to talk about ChatGPT. The philosopher Alfred Korzybski famously wrote that “the map is not the territory.” In other words, the map is a tool that captures and portrays only two or three dimensions of a given reality. Far more is omitted than captured or represented. It’s obvious to any adult viewer that a YouTube video isn’t reality, that Wikipedia, which I just linked to re Korzybski, is not the sum of all human knowledge.

But with ChatGPT, the map is very, very close to being the territory. The tool is close (and getting ever closer) to being identical to its human creators, or at least their collective, articulated creations. AI datasets contain trillions of data points, linked together via mathematical associations that locate each piece of information in a grid composed of thousands of dimensions. (More here on how AI works.)
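To give a feel for those “mathematical associations,” here’s a toy illustration. The three-dimensional vectors below are invented stand-ins for embeddings that, in real models, have thousands of dimensions; two pieces of text are “near” each other when the cosine of the angle between their vectors approaches 1.

```typescript
// Toy illustration of embedding geometry: cosine similarity between vectors.
// Real models use thousands of dimensions; these 3-D values are invented.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const king = [0.9, 0.7, 0.1];    // hypothetical embedding
const queen = [0.85, 0.75, 0.2]; // hypothetical embedding
const carrot = [0.1, 0.2, 0.9];  // hypothetical embedding

console.log(cosineSimilarity(king, queen));  // ≈ 0.99: nearby, related concepts
console.log(cosineSimilarity(king, carrot)); // ≈ 0.30: distant, unrelated concepts
```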

We’re approaching a point reminiscent of Argentine writer Jorge Luis Borges’ conception of “a Map of the Empire whose size was that of the Empire, and which coincided point for point with it,” a representation so detailed and comprehensive that it becomes indistinguishable from the reality it depicts. Similarly, AI, especially advanced models like GPT-4, strives to become an almost perfect mirror of human knowledge and behavior, capturing the vast intricacies of our language, knowledge and thought processes.

How to express the immensity of this knowledge trove in more human terms? ChatGPT has mapped at least 300 billion words and their interrelationships; can converse in more languages (>50) than any single human; has ingested more books than all the residents of New York City could read in a lifetime (that sounds good, but frankly, I’m BSing here… more on this below); and can consume and evaluate in seconds a 25,000-word essay or contract and offer meaningful suggestions. The latest AI engine, GPT-4o, scores in the 90th percentile on law exams. Most importantly, in my experience at least, AI’s answer to a given question often avoids (or flags) the biases and blind spots that plague any single human’s perceptions.

In the past, humans haven’t had difficulty distinguishing between new technology and our own minds or souls because there was always an obvious gap between the creator and the creation, the tool maker and the tool. That’s not the case with AI. Zooming back a little (and maybe I’ve cheated by waiting so long to note this), it’s worth highlighting that AI is an acronym for “artificial intelligence,” which means that AI is itself a modified human capacity, intelligence, rather than a modified tool like a carriage or the telegraph. This linguistic choice signifies our recognition of AI’s potential not just to support but also to seamlessly integrate with human knowledge and behavior. The line between natural and artificial intelligence will become increasingly faint and unarticulated.

AI hallucinates. There’s one AI behavior that humans like to gloat about: AI tends to hallucinate. If AI doesn’t know the answer to a question, often it won’t reply with “I don’t know” or “this is my best guess,” but instead serves up a rational-sounding package of information. The idea that machines generating ideas from large datasets “hallucinate” has been around for a while. (See hundreds of academic papers on AI hallucinating here.)

Unfortunately, at least if AI’s interrogator seeks bankable certainty, AI’s hallucinations are sometimes spectacularly wrong. Most famously, AI likes to add a jarring extra finger or earlobe or leg or joint to the people in images it creates. More insidiously, AI will invent a reasonable-sounding idea or fact, giving no notice that it’s writing fiction. AI even sometimes justifies its answers by referring to academic papers, title and author included, that it’s fabricated.

Some hallucinations are being ironed out. Previously, presented with the simple logic problem — “4 cars can travel from LA to SF in 3 hours. How long will it take 8 cars?” — GPT-3 might answer “six hours.” Today, though, GPT-4 replies correctly: “The number of cars traveling from LA to SF does not affect the travel time for the trip.” Congrats, GPT, for not adding “you idiot!”

Hallucinating has been the term of art to describe this phenomenon. Entire careers are being made around identifying and categorizing AI’s hallucinations. Recently, though, some philosophers have suggested that because “hallucinating” implies the absence of deceptive motivation or mendacity, a better term might be “bullshitting,” which emphasizes the possibility that AI intends to create fiction, or, in less anthropomorphic language, that AI is architected to generate output that seems credible, regardless of its veracity. (A friend says AI is a people pleaser.)

Maybe bullshit is AI’s superpower. Whether AI is hallucinating or bullshitting or people pleasing, it’s important to remember that this tendency may be, as software engineers say, “a feature, not a bug,” which is to say beneficial rather than (or in addition to) being detrimental. In short, maybe AI’s willingness to bullshit is good; maybe bullshitting is even one of AI’s superpowers. If AI only told us things that humans already knew, shared formulations that have already been proven, hypotheses that, over time, had been supported by enough evidence to become scientific certainties, it couldn’t help us explore the unknown. If AI were imprisoned by facts and logic, it would be dry and dull, a punctilious meth-headed Encyclopedia Britannica rather than a boozy Sorcerer’s Apprentice. Beyond all the eager buzz foreseeing AI as a generator of cost-efficiencies and cancer cures, AI may be uniquely useful when it unselfconsciously ventures “outside the box” of our extant knowledge, processes and assumptions.

AI is us. Utility aside, bullshitting makes AI more human and less machine-like. Humans hallucinate and/or bullshit all the time. Usually we’re not being mendacious; we’re exercising a skill that’s been selected for by evolutionary forces, a story-telling and certainty-evoking device that gives individuals and groups the speedy resolve to cope with sudden unanticipated threats or the gumption to start something new, to get up and over the steep hill ahead and into a green valley, a better reality, a promised land. When humans enter a new, unknown space or situation, whether physically or emotionally or intellectually, when we aren’t quite sure what’s going on, the skill of bullshitting drops into gear, often inadvertently and in a millisecond. If we couldn’t invoke this superpower, we’d expend huge time, energy and attention on evidence-gathering and second-guessing. We’d live perched on the edge of a bottomless pit of uncertainty that might drive us crazy. Challenged with a novel and urgent puzzle, we scan all the data and life-lessons we’ve accumulated and quickly extrapolate an answer. If we’re attentive and have time, we may append “my guess is,” but often we’re in a hurry or panicked and just blurt out a hypothesis as a fact, a hunch as a plan, a puzzled grunt as an answer. If we’re smart, we double-check this assumption/guess as our reality continues to unfold, feeding us more data that confirms or denies or reshapes the hunch. But often we just steam ahead until the hypothesis gets us to safety or… crashes unequivocally into a brick wall.

And, at least in some situations, bullshitting makes AI more powerful. After all, whether Moses was God’s messenger or a bullshit artist, the Israelites got to Jordan.

Conclusion: Sixty years after AI sprouted, we’re still relying on human characteristics and skills to describe its skills and characteristics. We’ve never previously relied on anthropomorphic language to describe new technology; we’ve always used recent tech as linguistic building blocks (horseless carriages, etc.) to describe even radically new tech. But we can’t just describe AI as “a souped-up computer.” It’s way too souped up, and it acts way too human, for that analogy to be credible. As an embodiment of billions of our own creations — words, phrases, chapters, drawings, theories, software code — AI mirrors humanity’s own strengths and weaknesses. Rather than being a tool to grasp, it’s the hand, us. The better we understand this cross-mapping and self-identity (or hallucination!), the better we can use AI as our own superpower.

Caveat: To be clear, I’m neither a philosopher nor a mathematician. So all of the above is just a guess, an extrapolation from some words and data points, an exercise that may point me or others in a fruitful new direction. (And, yes, I had ChatGPT’s help bullshitting.)