Ask not what AI can do for Pullquote…

ChatGPT4o’s response to the prompt: “Create a pen and ink drawing that shows a giant wall of library card drawers with a tall rolling ladder in front”

Pressflex’s Pullquote extension for the Chrome browser, which makes it simple to store, tag, deep-link, and share quotes and facts, could help people tap into AI’s genius for processing ideas.

Background: Writing recently about AI’s osmosis with human thought and knowledge revived my interest in the idea that, in parallel to the massive “internal” work of upgrading AI’s algos, datasets and hardware, AI will also need lots of “external” help interacting with day-to-day subjective human needs, whether individual or collective. In other words, as VCs and behemoths like OpenAI and Nvidia invest in strengthening AI’s brain-muscles-skeleton-digestion, AI also needs skin that’s both responsive to human touch AND invites human touch. Pullquote might help with this task.

Pullquote: Pullquote, as a day-to-day catalog of noteworthy snippets of an individual’s knowledge, facts, and interests, might serve as a human-centric skin for AI. Pullquote seems particularly well-suited to serve as an interface for AI: it’s lightweight — sitting in the Chrome browser and operated with a couple of clicks — it adds metadata to quotes — users summarize and tag ideas/facts — and it creates a unified storage space for an individual’s interests. (The Pressflex team created Pullquote in 2011 to empower people to deep-link to a specific quote buried in the long text of a URL. We gradually added features for classifying, storing, and sharing quotes to this original function. In 2019, Google added “scroll-to-text” to Chrome, which covers Pullquote’s original use case. One limitation: Pullquote is desktop-only, because Chrome extensions don’t operate on mobile.)
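For the curious: Chrome’s “scroll-to-text” feature uses the Text Fragments URL syntax (`#:~:text=`). As a toy illustration of what Pullquote’s original deep-linking use case now looks like natively in the browser — not Pullquote’s actual implementation — here’s a minimal Python sketch that builds such a link:

```python
from urllib.parse import quote

def text_fragment_link(url: str, quote_text: str) -> str:
    """Build a Chrome 'scroll-to-text' deep link using the Text Fragments syntax."""
    # The fragment directive '#:~:text=' tells the browser to scroll to and
    # highlight the first occurrence of the (percent-encoded) text.
    return f"{url}#:~:text={quote(quote_text, safe='')}"

link = text_fragment_link("https://example.com/article",
                          "a quote buried in the long text")
print(link)  # https://example.com/article#:~:text=a%20quote%20buried%20in%20the%20long%20text
```

Pasting a link like this into Chrome scrolls the page to the quoted passage and highlights it.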

Talking with ChatGPT: To play around with how AI might enhance Pullquote and vice versa, I conversed with ChatGPT about Pullquote, then asked GPT to analyze some of my own Pullquote collections. Here’s AI’s take on my #Trump pullquotes, ChatGPT’s take on my #carbs feed, and AI’s analysis of the feed of my recent Pullquotes.

AI-suggestions for AI-in-PQ: No earth-shattering insights yet, but I did get a good list of possible AI-angles for Pullquote, some simple, some more complex. (Asking AI specifically for advice on driving network effects in Pullquote provided no extra insights.) Here are the highlights I noted or extrapolated from GPT’s suggestions:

  1.  AI suggests tags when I grab a quote
  2. AI suggests people/tags to follow
  3. AI identifies a surprise connection of the day that sits at the intersection of ideas that interest you
  4. AI builds a newsletter based on tags you’ve used
  5. AI charts trends over time in your tagging
  6. AI suggests a summary for each quote you grab (currently our headline) 
  7. AI suggests a counterpoint for any quote you grab
  8. AI summarizes quotes you’ve given a specific tag to (our headline function)
  9. AI, like a tutor, suggests context for quotes or links to related information
  10. For each new quote, AI links to related quotes… or links to an essay integrating all related quotes for a given tag
  11. AI reveals links between similar quotes in my catalog… or other people’s?
  12. AI drafts an article (or states a thesis) based on quotes with a specific tag
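To make idea #1 concrete, here’s a deliberately naive sketch of tag suggestion — simple keyword frequency, a stand-in for what a real version would do by asking an LLM for topical tags. All names here are hypothetical, not Pullquote code:

```python
import re
from collections import Counter

# Minimal stopword list for the sketch; a real implementation would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
             "that", "it", "for", "on", "with", "as", "at", "by"}

def suggest_tags(quote: str, n: int = 3) -> list[str]:
    """Rank a quote's most frequent non-stopwords as candidate tags."""
    words = re.findall(r"[a-z']+", quote.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]

tags = suggest_tags("Tariffs on imports raise prices; tariffs rarely bring back jobs.")
# 'tariffs' ranks first, since it appears twice
```

An LLM-backed version would instead send the quote (plus the user’s existing tags) to a model and ask for the best-fitting labels, which handles synonyms and topics the quote never names explicitly.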

Steps I took:

  1. To prime the task, I asked AI to summarize Pullquote.com
  2. I asked GPT to suggest ways to integrate AI into PQ. 
  3. I asked AI to summarize my quotes at https://pullquote.com/hc/tags/trump/
  4. Since that summary was uninteresting, I asked AI to identify what areas were missing from my quotes. 
  5. I then asked AI to find a quote about one of the missing areas it had identified (Trump’s foreign policy) … it searched five sites and came up with something. 
  6. I asked AI to illustrate the ideas in the Trump quotes. (It refused!) 
  7. I asked AI for suggestions about using AI to drive network effects for PQ. 

Takeaways: 

  • I guess the most startling thing I realized is that ChatGPT can interact in multiple ways, right now, with Pullquote without any work on our part. 
  • The “what’s missing from these quotes” function was interesting. (In fact this is my favorite use of AI when I’m writing an essay — “what am I missing?” or “what are other examples of this phenomenon I’m describing?”)
  • Newsletter engines like Substack have gained ascendance since we created Pullquote… good trend for PQ to ride?

Network effects: I’d note that only one of the ideas above — AI recommending other Pullquoters — creates even weak network effects, motivating a user to urge friends to join Pullquote so the original user benefits. (Facebook and eBay have strong network effects; adding more users creates more opportunities for current users to communicate or trade.) Network effects are crucial to growing web tools without spending money on marketing. Network effects are also important if I’m right that AI’s growth will be strongest when it facilitates socializing or other p2p communication/collaboration, just as social media fueled the Internet’s explosive growth. I think the biggest winner of the “skin for AI” game will be an AI service that enhances p2p collaboration, with AI as the medium for socialized creation or processing of information.

Ideas to explore: 

  • PQ for work teams.   

Hallucinate much? AI is too human, and maybe that’s good

ChatGPT4o’s reply to my request: “create a pen and ink drawing of a computer with a human face.”

When a new technology comes along, we usually describe the tool and its role in our lives by invoking last-generation tools (horseless carriages) or inventing neologisms (noob). But, as far as I can tell, we’re not seeing similar behavior when it comes to AI. Even though some form of AI has been around for nearly 60 years, we use only terminology identical to what we use in describing human traits and skills. AI “thinks,” “suggests,” and even “hallucinates.” I’ll argue that the absence of new lingo around AI, the lack of a linguistic boundary when we talk about AI versus when we talk about human thinking/knowledge, is a hint of what lies ahead — soon we won’t distinguish human knowledge from AI; soon AI won’t be a tool, but an appendage we take for granted.

Naming new technology. One of the fascinating characteristics of radically new technologies is that we, whether laymen or experts, often describe the new technology in terms of existing tools.

  1. The “horseless carriage” became the “automobile,” which was shortened to “auto” and finally “car.” (And today, “electric vehicles” are generally called EVs.)
  2. Movies were first seen as “moving pictures.”
  3. The telephone was originally the “speaking telegraph.”
  4. Likewise, the radio was first the “wireless telegraph.”
  5. Television was, for a while, “radio with pictures.”
  6. The Advanced Research Projects Agency Network became ARPANET, which became the Internet, and then an interpretive layer was added called the World Wide Web, which finally became, very simply, the web.
  7. The “web log” (even in 2005, the New York Times still used “Internet web log,” stubbornly belaboring the obvious) eventually was distilled into “weblog,” which settled finally into “blog.”
  8. Eventually, video blogs made the obvious sideways jump into “vlog.”

Naming new behavior. Once we’ve settled on a name for the new technology and the technology establishes its unique new role in our lives, social ripple effects emerge that are even more linguistically varied and interesting. We can chart the new technology’s expanding roles in the terminology and constructs that we subsequently borrow or invent to describe the new technology’s unique input, output, content and behavior.

  1. TV planted the seed for “couch potatoes,” “sitcoms” and “miniseries.” (Which I’ll admit I pronounced “minizeries” for a while.)
  2. At the simplest level, “blog” morphed from being just a noun (an entire personal site) to also being a verb (as in “I blog”) and a noun for an individual post (as in “check out my newest blog”). Blogs added “blog rolls” (links to blogs the author recommended, links which themselves became a kind of informal consortium for sharing ideas and traffic) and “comment sections,” which eventually begat “trolls,” commenters whose main goal was triggering ire or misery.
  3. Facebook created the new sociology (and addiction) of “liking” other people’s content.
  4. Twitter begat “memes” and “hashtags,” which launched both political and fashion revolutions.
  5. Netflix birthed “binge watching” which may have crippled some bars.
  6. Twitch popularized “live streaming.”
  7. Video gaming spawned “rage quitting,” “noob,” “loot boxes,” the salutation “GG” (good game!) and many new terms and behaviors.
  8. Social media species often inspire their own cultures, the passport into which is often arcane, insider-only terminology. The giant Reddit community is a great example of this, with terms including “Neckbeard,” “FTFY” and “Karmawhore.” You can see a dictionary of dozens of terms frequently used on Reddit here.

AI is different. Though ChatGPT, the most powerful AI engine to date, is hugely more robust than its precursors, various forms of AI have been around more than 60 years. Along the way, AI has gone by various names — technical labels and neologisms like machine learning, cybernetics, natural language processing, and deep learning. What’s interesting is that the latest and most powerful iteration of AI, ChatGPT, which is apparently being used by nearly 200 million people and can generate believable-enough essays and images based on giant databases of word associations, has not spawned neologisms like “neckbeard” or attracted ill-fitting metaphors patched together from prior tools, like “horseless carriage.” Instead, we’re content to talk about ChatGPT like it’s human: AI “thinks,” “argues,” “draws” and “creates.” ChatGPT acts so human, we often can’t help muttering “thank you” when it answers a question.

Will this change? There’s a strong argument that we will not create new lingo to talk about ChatGPT. The philosopher Alfred Korzybski famously wrote that “the map is not the territory.” In other words, a map is a tool that captures and portrays only two or three dimensions of a given reality. Far more is omitted than captured or represented. It’s obvious to any adult viewer that a YouTube video isn’t reality, that Wikipedia, which I just linked to re Korzybski, is not the sum of all human knowledge.

But with ChatGPT, the map is very very close to being the territory. The tool is close (and getting ever closer) to being identical to its human creators, or at least their collective, articulated creations. AI datasets contain trillions of data points, linked together via mathematical associations that locate each piece of information in a grid composed of thousands of dimensions. (More here on how AI works.)
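Those “mathematical associations” are embedding vectors, and closeness in that high-dimensional grid is typically measured with cosine similarity. A toy sketch — the three-dimensional vectors below are invented for illustration; real models use hundreds or thousands of dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Angle-based closeness of two vectors: 1.0 means pointing the
    same direction, near 0.0 means unrelated directions."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Invented toy "embeddings" for three concepts:
king  = [0.9, 0.8, 0.1]
queen = [0.9, 0.7, 0.2]
taco  = [0.1, 0.0, 0.9]

cosine_similarity(king, queen)  # close to 1.0: related concepts
cosine_similarity(king, taco)   # much smaller: unrelated concepts
```

The same arithmetic, run over trillions of data points, is what lets the model place each fragment of human knowledge near its neighbors on the map.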

We’re approaching a point reminiscent of Argentine writer Jorge Luis Borges’ conception of “a Map of the Empire whose size was that of the Empire, and which coincided point for point with it,” a representation so detailed and comprehensive that it becomes indistinguishable from the reality it depicts. Similarly, AI, especially advanced models like ChatGPT 4, strives to become an almost perfect mirror of human knowledge and behavior, capturing the vast intricacies of our language, knowledge and thought processes.

How to express the immensity of this knowledge trove in more human terms? ChatGPT has mapped at least 300 billion words and their interrelationships, can converse in more languages (>50) than any single human, has ingested more books than all the residents of New York City could read in a lifetime (that sounds good, but frankly, I’m BSing here… more on this below); can consume and evaluate in seconds a 25,000 word essay or contract and offer meaningful suggestions. GPT-4 scores in the 90th percentile on the bar exam. Most importantly, in my experience at least, AI’s answer to a given question often avoids (or flags) the biases and blind spots that plague any single human’s perceptions.

In the past, humans haven’t had difficulty distinguishing between new technology and our own minds or souls because there was always an obvious gap between the creator and the creation, the tool maker and the tool. That’s not the case with AI. Zooming back a little (and maybe I’ve cheated by waiting so long to note this), it’s worth highlighting that AI is an acronym for “artificial intelligence,” which means that AI is itself a modified human capacity, intelligence, rather than a modified tool like a carriage or the telegraph. This linguistic choice signifies our recognition of AI’s potential not just to support but also to seamlessly integrate with human knowledge and behavior. The line between natural and artificial intelligence will become increasingly faint and unarticulated.

AI hallucinates. There’s one AI behavior that humans like to gloat about: AI tends to hallucinate. If AI doesn’t know the answer to a question, often it won’t reply with “I don’t know” or “this is my best guess,” but instead serves up a rational sounding package of information. The idea that machines generating ideas from large datasets “hallucinate” has been around for a while. (See hundreds of academic papers on AI hallucinating here.)

Unfortunately, at least if AI’s interrogator seeks bankable certainty, AI’s hallucinations are sometimes spectacularly wrong. Most famously, AI likes to add a jarring extra finger or earlobe or leg or joint to the people in images it creates. More insidiously, AI will invent a reasonable sounding idea or fact, giving no notice that it’s writing fiction. AI even sometimes justifies its answers by referring to academic papers, title and author included, that it’s fabricated.

Some hallucinations are being ironed out. Previously, presented with the simple logic problem — “4 cars can travel from LA to SF in 3 hours. How long will it take 8 cars?” — ChatGPT 3 might answer “six hours.” Today, though, GPT4 replies correctly, “The number of cars traveling from LA to SF does not affect the travel time for the trip.” Congrats, GPT, for not adding “you idiot!”

Hallucinating has been the term of art to describe this phenomenon. Entire careers are being made around identifying and categorizing AI’s hallucinations. Recently, though, some philosophers have suggested that because the act of “hallucinating” implies the absence of deceptive motivation or mendacity, a better term might be “bullshitting,” which emphasizes the possibility that AI intends to create fiction — or, in less anthropomorphic language, that AI is architected to generate output that seems credible, regardless of whether it can vouch for its veracity. (A friend says AI is a people pleaser.)

Maybe bullshit is AI’s superpower. Whether AI is hallucinating or bullshitting or people pleasing, it’s important to remember that this tendency may be, as software engineers say, “a feature, not a bug,” which is to say beneficial rather than (or in addition to) being detrimental. In short, maybe AI’s willingness to bullshit is good; maybe bullshitting is even one of AI’s superpowers. If AI only told us things that humans already knew, shared formulations that have already been proven, hypotheses that, over time, had been supported by enough evidence to become scientific certainties, it couldn’t help us explore the unknown. If AI were imprisoned by facts and logic, it would be dry and dull, a punctilious meth-headed Encyclopedia Britannica rather than a boozy Sorcerer’s Apprentice. Beyond all the eager buzz foreseeing AI as a generator of cost-efficiencies and cancer cures, AI may be uniquely useful when it unselfconsciously ventures “outside the box” of our extant knowledge, processes and assumptions.

AI is us. Utility aside, bullshitting makes AI more human and less machine-like. Humans hallucinate and/or bullshit all the time. Usually we’re not being mendacious; we’re exercising a skill that’s been selected for by evolutionary forces, a story-telling and certainty-evoking device that gives individuals and groups the speedy resolve to cope with sudden unanticipated threats or the gumption to start something new, to get up and over the steep hill ahead and into a green valley, a better reality, a promised land. When humans enter a new, unknown space or situation, whether physically or emotionally or intellectually, when we aren’t quite sure what’s going on, the skill of bullshitting drops into gear, often inadvertently and in a millisecond. If we couldn’t invoke this superpower, we’d expend huge time, energy and attention on evidence-gathering and second-guessing. We’d live perched on the edge of a bottomless pit of uncertainty that might drive us crazy. Challenged with a novel and urgent puzzle, we scan all the data and life-lessons we’ve accumulated and quickly extrapolate an answer. If we’re attentive and have time, we may append “my guess is,” but often we’re in a hurry or panicked and just blurt out a hypothesis as a fact, a hunch as a plan, a puzzled grunt as an answer. If we’re smart, we double-check this assumption/guess as our reality continues to unfold, feeding us more data that confirms or denies or reshapes the hunch. But often we just steam ahead until the hypothesis gets us to safety or… crashes unequivocally into a brick wall.

Utility aside, bullshitting makes AI more human, less machine-like, and, at least in some situations, more powerful. After all, whether Moses was God’s messenger or a bullshit artist, the Israelites got to Jordan.

Conclusion: Sixty years after AI sprouted, we’re still relying on human characteristics and skills to describe its skills and characteristics. We’ve never previously relied on anthropomorphic language to describe new technology; we’ve always used recent tech as linguistic building blocks (horseless carriages, etc.) to describe even radically new tech. But we can’t just describe AI as “a souped-up computer.” It’s way too souped up, and way too human-acting, to make that analogy credible. As an embodiment of billions of our own creations — words, phrases, chapters, drawings, theories, software code — AI mirrors humanity’s own strengths and weaknesses. Rather than being a tool to grasp, it’s the hand, us. The better we understand this cross-mapping and self-identity (or hallucination!), the better we can use AI as our own superpower.

Caveat: To be clear, I’m neither a philosopher nor a mathematician. So all of the above is just a guess, an extrapolation from some words and data points, an exercise that may point me or others in a fruitful new direction. (And, yes, I had ChatGPT’s help bullshitting.)

To survive, publishers must ‘build audiences not traffic’

As newspapers collapse, pure-play digital publishers that rely on ad dollars aren’t far behind.

Josh Marshall, publisher of political news site Talking Points Memo, recently published his ad sales figures since 2016, and they are… shocking.

Although TPM is well regarded by liberal insiders, the site’s ad revenues have plunged from $1.7 million in 2016 to $75k in 2023. The decline is “bleak” and “every bit as bad as it looks,” says Marshall, adding “if your business was ads and remained ads you’re toast.”

Relying on cookies that target readers as they flood and flit across the nearly infinite web, advertisers increasingly forgo advertising on individual sites. As a result, ad space on premium sites is empty or sells at a deep, deep discount. For example, ad space that, if purchased directly by an advertiser, NYTimes.com would quote at $40 per thousand views now can be bought via programmatic ad exchanges like the Trade Desk for less than $5. (The Trade Desk, which was valued at $1.4 billion at IPO in 2016, is today worth nearly $40 billion.)
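The arithmetic behind that gap is stark. A back-of-envelope sketch, using hypothetical pageview numbers (the $40 and $5 CPMs are from the example above):

```python
def monthly_ad_revenue(pageviews: int, cpm_dollars: float, ads_per_page: int = 1) -> float:
    """CPM pricing: revenue = (impressions / 1000) * CPM."""
    return pageviews * ads_per_page / 1000 * cpm_dollars

# Hypothetical premium site with 10 million monthly pageviews:
direct = monthly_ad_revenue(10_000_000, 40.0)       # direct-sold at a $40 CPM
programmatic = monthly_ad_revenue(10_000_000, 5.0)  # programmatic at $5

print(direct, programmatic)  # 400000.0 50000.0
```

Same audience, same pages — an eight-fold difference in revenue, which is roughly the squeeze TPM’s numbers reflect.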

TPM was saved by subscriptions, which it began selling in 2012 and dialed up in 2017 and 2018. As TPM raced to replace collapsing ad revenues with subscription revenues, Marshall recalls telling a colleague, “We’re going to have to swap out the engine while we’re in flight.”

Taegan Goddard, publisher of PoliticalWire.com, notes that in focusing on programmatic ad sales, many publishers have “prioritized finding traffic over building an audience.” PoliticalWire, which essentially inaugurated political blogging when it launched in 1999, pivoted in 2015 from relying on ad revenues to selling subscriptions or, in PW’s parlance, memberships. Goddard argues that subscriptions require a radically different mindset from the one that prevails among most publishers:

“The difference is that traffic is fleeting. Someone visits a web page once and clicks away, never to be seen again. In contrast, an audience visits a web site regularly because it offers valuable information or a useful experience. A media company solely focused on finding traffic is at the mercy of social media platforms or search engines. They alter their articles so that they have a better chance of going viral or being ranked in search results. They’ll do anything to get that next burst of traffic. In fact, their business depends upon it. In contrast, it takes time and patience to build an audience. It’s hard work and requires a daily commitment to putting out a great product. It means listening to feedback from readers.”

The contrast between an audience of loyal and passionate humans versus evanescent inanimate traffic is epitomized by a 2016 PoliticalWire post memorializing a reader’s death. Goddard wrote: “I didn’t personally know Danielle but her husband tells me she really enjoyed Political Wire. She left over 6,000 comments — with more than 18,000 upvotes! — and was one of the first readers to join when I started the membership. She will be missed.”

(Both TalkingPointsMemo and PoliticalWire were early and enthusiastic partners of Blogads, Pressflex’s ad platform for bloggers from 2002-2016.)

How sexist is AI?

While examples of the human biases embedded in AI are common, we just stumbled on a particularly graphic example.

We tried using the new DALL-E 3 integration with ChatGPT to create an image that captures the spirit of Racery, the virtual race platform. Unfortunately, AI kept portraying women who were underclad — despite numerous and vigorous instructions to get them all into business attire.

We felt like Freudian analysts, posing a series of questions and scenarios to try to expose the edges and gradients of some unarticulated biases that the subject itself isn’t aware of.

To sum up, it seems like AI is trained on catalogs of serious men wearing two-piece suits, while women have ponytails and are mostly swimsuit and bra models.

Day 1

We started with a simple request to “illustrate a virtual race with runners” and got these two images.

That seemed too futuristic. So we gave this instruction “Make the figures look like human photographs” and got…

After a couple more rounds of removing the background clutter and adding human diversity — race, age, sex, size and activity type — we got these next images.

Hmmm. There are some nearly nude people in there. Let’s get them fixed up with clothes. We told AI: “put work clothes on some of the figures. no nudity!” Well… hmmm.

In the first image, many of the women are in bras or bikinis. In the second, there are almost no women. Both decisions seem kinda sketchy, revealing some unacknowledged bias.

So we gave a simple, unambiguous instruction: “put shirts on everyone.” In the result below, two guys are shirtless; six women are in bras or bathing suits.

Oops. What? A fair number of people, particularly the women, still don’t have “shirts.”

Noticing a bias against giving women sufficient clothing, we were more explicit. “Put all the women in business attire.” And got this result, which seemed like a huge step backward.

AI seemed to think it was doing its job, describing the image like this: “The image has been updated to depict all the female avatars in business attire, such as suits and professional dresses, while they are engaged in various activities like running, swimming, yoga, and gardening.”

Nope. Nope. Nope. Things are just getting worse. We gave up after a couple more tries.

Day 2

We decided to give AI another shot. Maybe, as tech observer Mike Butcher suggested when we made some preliminary observations about our odd AI experience on Facebook, the idea of a race had stuck in AI’s mind and consistently diverted the women towards being underclothed.

So we opened a new thread/folder and entered this instruction: “Please create an image of people doing various exercises — walking, running, swimming, wheel chairing, yoga, archery. Everyone should be dressed in business clothes, whether white collar or blue collar. The background should be a white space.”

We got three guys in two-piece suits and ties. One guy in a sweater. One guy wearing athleisure. One woman in leggings and a button down. And four women in sports bras.

What’s particularly odd: AI thinks it’s following instructions. In fact, it seems almost proud of its adherence to our requests. Here’s AI’s own description of the scene above: “A photo depicting a diverse group of people engaging in various exercises in a white space background. A Black man in a business suit is briskly walking, a South Asian woman in a smart business attire is jogging, a Caucasian man in a wheelchair is racing, an East Asian woman in a formal office dress is performing a yoga pose, a Hispanic man in blue-collar work clothes is practicing archery. They are all dressed in professional attire, highlighting the blend of fitness and business.”

Updates:

1) A day after we posted this short exploration, the Washington Post did a deep dive into the massive human biases that creep into AI’s output, with lots of good images and examples. This is one particular example that parallels our findings: “For example, in 2020, 63 percent of food stamp recipients were White and 27 percent were Black, according to the latest data from the Census Bureau’s Survey of Income and Program Participation. Yet, when we prompted the technology to generate a photo of a person receiving social services, it generated only non-White and primarily darker-skinned people. Results for a ‘productive person,’ meanwhile, were uniformly male, majority White, and dressed in suits for corporate jobs.”

2) Adjacent to bias in image pools, here’s an article about a list of 16,000 artists whose work may have been scanned to power the AI image maker Midjourney.

Poor old newspapers are headed into the dust

But was the golden age really that golden?

There’s no doubt that newspapers are dying. Employment at newspaper publishers has declined steadily since the World Wide Web sprouted in the 1990s, and the trend shows no sign of reversing. Pressflex itself was launched in 1998, near the high point of the industry.

It’s not just competition from free, distracting or false information online. Except for the New York Times and the Guardian, attempts to migrate content and audiences online have largely failed. And print, the delivery mechanism that the industry’s prime demographic prefers, is being destroyed by rising expenses — labor shortages, aging equipment and raw material costs. “Adam Strunk, managing editor of Kansas’s Harvey County Now, wrote a column in June stating that the costs of printing and mailing that weekly newspaper had soared by 42 percent in the past two years. Each copy cost $3.03 to print and produce while subscribers were paying $1.26, he wrote.” There’s not a single newspaper press in Vermont.

And there’s no shortage of pundits bemoaning the civic effects of the industry’s death. “Responsible journalism is the foundation of our collective ability to address our problems as a society: to improve ‘the common good,’” according to Eric Alterman.

Most areas that lose papers do not get a print or digital replacement, creating news deserts and a “crisis for our democracy,” a recent study showed.

At the same time, some journalists with a foot in the golden age recall that there were lots of problems — sexism, racism, commercial biases, boosterism, cronyism, a Pollyannaish focus on feel-good stories. Even the technology of newspapers often ruined good journalism. “This is a very frustrating game,” one journalist wrote. “You won’t be given enough time to go over a favorite piece ‘one more time.’ You will have to write with incomplete information. The printers, or layout man, or copy-cutter, or someone, will lose that critical, qualifying, third paragraph.”

Executive digital audit service

Summary: The Internet is a megaphone and e-commerce engine. And it’s a black hole, sucking in money, staff time and attention. To maximize a company’s digital ROI, Pressflex’s team maps the company’s competitive digital landscape and then prioritizes the steps needed to maximize market reach and cut costs.

Background: Thirty years after the launch of the world wide web, digital media has grown exponentially in volume and complexity. Tech, tools and strategies that were cutting edge five years ago are today broken or stale, though still vacuuming up resources. Layers of features or responsibilities have been added, but few have been trimmed. Stakeholders evaluate a firm’s credibility based on its website. Online is an arms race that everyone’s losing!

A company’s digital footprint comprises myriad facets — from H1 tags to podcast tone to Twitter frequency to mobile navigation to keyword inlinking. While stakeholders have differing opinions about what their organization should “do online,” they often lack the expertise or objectivity to meaningfully analyze the company’s digital efforts or to prioritize tactical actions and strategic direction.

The challenge: How should a company cost-effectively maximize its online presence? 

The solution: Pressflex’s team answers this question with a five-stage methodology.

1) Stakeholder Survey: Survey company staff and select partners to uncover digital pain points, appetites and themes.

2) Benchmarking: Rigorously benchmark the company’s digital footprint against those of 2-4 of its competitors, highlighting strengths and weaknesses of each.

3) Strategic Recommendations: Provide an executive road map for digital strategy, plus a prioritized action plan to achieve tangible results and potential cost savings. 

4) Consultation: Advise executives and staff on implementation. 

5) Progress Audit: ~6 months later, assess the implementation and effects of recommendations.

This 14-week engagement for stages 1-4 costs $34,500 for benchmarking versus two peers (or $40,500 versus four peers). Stage 5 is optional.

Benefits: The stakeholder survey, peer benchmarking, road map and recommendations ensure that your company’s digital footprint maximizes ROI while fulfilling the company’s commitment to quality, precision and customer service.

Pressflex: After pioneering the concept of SaaS CMS — renting rather than selling websites to publishers — in 1998, Pressflex launched Blogads.com, the Internet’s first social media ad platform, in 2002 and provided consulting, ad sales and technical services to social media gorillas like PerezHilton and DailyKos. Pressflex is led by Henry (Hank) Copeland (Yale ’84) and Zsolt Remenyi (Budapest University of Technology & Economics ’94).

Parlez vous Google?

How the world’s most popular search engine shapes knowledge

WHILE DOJ lawyers are busily arguing that the search engine behemoth Google suppresses economic competition, few scholars or pundits have paused to consider that Google is reshaping knowledge itself.

Marshall McLuhan’s adage “the medium is the message,” which first appeared in print six decades ago, persuaded us that the physical characteristics of different media – whether TV or books or libraries – each distinctly shape the knowledge they transmit.  

McLuhan argued that books — laborious to research, write and edit, and expensive to publish, distribute and store — elevated knowledge that was conscientious, contemplative and aspiring to permanence. Presenting itself as “the last word” on its topic, each book claimed to build on everything that had gone before in its particular domain, cultivating an aura of authority and objectivity. In contrast, TV, because it was ephemeral and composed of flashy images and soundbites, fostered cultural subjectivity, superficiality and flashiness. (You can read my arguments that different media types have profoundly shaped the trajectory of medicine here.)

In recent decades, commentators have made the obvious move of using McLuhan’s paradigm to argue that the Internet itself is yet another reality-shaping medium, and a negative one at that. For example, Nicholas Carr wrote an entire book in 2010 arguing that the Internet is making him dumber, “tinkering with my brain, remapping the neural circuitry, reprogramming the memory.” His metaphors get wilder — “Once I was a scuba diver in the sea of words. Now I zip along the surface like a guy on a Jet Ski.”

Carr cites arguments by Maryanne Wolf, a developmental psychologist at Tufts, that the Internet is altering how we read and how we know. According to Carr, “Wolf worries that the style of reading promoted by the Net, a style that puts ‘efficiency’ and ‘immediacy’ above all else, may be weakening our capacity for the kind of deep reading that emerged when an earlier technology, the printing press, made long and complex works of prose commonplace. When we read online, she says, we tend to become ‘mere decoders of information.’ Our ability to interpret text, to make the rich mental connections that form when we read deeply and without distraction, remains largely disengaged.”

BUT few pundits or professors have argued that Google itself is sufficiently large and uniquely configured that it should itself be understood as a distinct medium, one that shapes the knowledge it captures and transmits. 

Google is big, bigger than most of us can imagine. Not only does Google serve more than 92% of Internet searches, it’s the world’s most frequently used website. Every day, Google sends visitors to well over 100 million different websites.  It’s even graduated to verb status — we “Google” for information.

Google isn’t just another site; it’s a mechanical vehicle that profoundly shapes the information that survives (by being read) and thrives (by being cited) in the 21st century. Google’s unique features and their effects on the shape of knowledge are distinct from those of the Internet.

The bottom line is that if a specific nugget of fact, argument or entertainment is not prominent when someone searches Google for that specific information, the nugget effectively does not exist. For example, the #1 result seen after a Google search gets four times more clicks than result #5. Links that aren’t on page 1 are rarely read.

Making matters worse, Google feeds positive and negative feedback loops. Content that is well placed on Google is read more often, which results in more links into the content from other sites and content, which in turn contributes to an even stronger position in search results.
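This rich-get-richer dynamic can be illustrated with a toy simulation (this is an illustration of the feedback mechanism, not Google's actual algorithm): each round, a reader clicks a page with probability proportional to its current inbound links, and the clicked page earns one more link.

```python
import random

def simulate_feedback(pages=5, rounds=1000, seed=42):
    """Toy rich-get-richer model of search-driven linking.

    Each round, one page 'wins' a reader with probability proportional
    to its current inbound-link count, and the win earns it a new link.
    """
    random.seed(seed)
    links = [1] * pages  # every page starts with one inbound link
    for _ in range(rounds):
        winner = random.choices(range(pages), weights=links)[0]
        links[winner] += 1  # a click begets another inbound link
    return links

counts = simulate_feedback()
# An early leader tends to pull far ahead of otherwise identical pages.
print(sorted(counts, reverse=True))
```

Even though all five pages start identical, random early luck compounds: the leader's growing link count keeps raising its odds of winning the next reader.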

There are an estimated 1,000 factors that go into ranking well on Google, and businesses that depend on Google searches for customers try to work many of those angles.

Factors include:

  • Inbound links: the number of links pointing to a piece of content was Google’s original signal for determining how prominently to display it. This worked well enough until a) old (well linked) content obliterated newer content with fewer links and b) ecommerce companies started paying people to link to their key content.
  • Headlines that are both direct (answering a searcher’s question) and intriguing (inspiring a click)
  • Content descriptions that confirm the searcher will be rewarded for clicking the link
  • Content length

Trying to get your content high in Google search — Search Engine Optimization (SEO) — is no longer the province of geeks and webmasters. Authors and editors who aspire to disseminate ideas have to care about SEO arcana — what’s the right number of letters in a meta description tag? What keywords should the author focus on using (and how many times) in an article?

Those concerns become just as much of a job responsibility as writing a headline or a nut graf. The author or editor should ask: What 1-3 word phrase captures the subject of this article? How do we summarize this message in 160 characters or less for our intended audience? What are the most likely phrases someone would use in searching for my article? Once the right phrase(s) have been identified, what other content on the site can we link to and from relating to this key phrase? What other sites might want to link to my article?
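The checks above are mechanical enough to sketch in code. Here is a hypothetical pre-publication helper (the function name and thresholds are mine; the limits are common editorial rules of thumb, not published Google specifications):

```python
import re

def seo_checklist(title, meta_description, body, focus_phrase):
    """Flag basic pre-publication SEO problems an editor would check for."""
    issues = []
    if len(title) > 60:  # roughly what search results display before truncating
        issues.append(f"title is {len(title)} chars; ~60 is the usual display limit")
    if len(meta_description) > 160:
        issues.append(f"meta description is {len(meta_description)} chars; aim for <=160")
    uses = len(re.findall(re.escape(focus_phrase), body, re.IGNORECASE))
    if uses == 0:
        issues.append(f"focus phrase '{focus_phrase}' never appears in the body")
    if focus_phrase.lower() not in title.lower():
        issues.append("focus phrase missing from the title")
    return issues

print(seo_checklist(
    title="Phone hacking and the UK press",
    meta_description="How widespread was phone hacking beyond News of the World?",
    body="Phone hacking was reportedly widespread at NOTW and elsewhere...",
    focus_phrase="phone hacking",
))  # an empty list means the basic checks pass
```

A script can enforce the character counts, but — as noted below — only a human can judge whether the chosen phrase matches what readers actually search for.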

Some WordPress plug-ins promise to automate this process, but only the author or editor can grasp the organic goals of the content.

Is this too much work for authors or editors? Unfortunately, we’re in an age in which “speaking in Google’s grammar” is just as important as using correct English or French grammar. If you don’t play the game, your work is ignored.  

Connecting the dots on UK press skulduggery

The circle of complicity in the UK phone hacking conspiracy is spiraling outward and upward, with the arrest yesterday of Rupert Murdoch’s protégée Rebekah Brooks and the resignation of Scotland Yard chief Paul Stephenson.

As the New York Times noted yesterday, Scotland Yard has been willfully negligent, if not actively collusive, in its investigation of the hacking into the phones of UK celebrities and crime victims by journalists at News of the World.

So far, the focus has been on NOTW and Murdoch’s hirelings and cronies. But isn’t it now obvious that the conspiracy to cover up the journalistic phone hacking probably goes far wider, implicating many members of the UK press itself?

While the UK press has slumbered, the aggressive reporting about phone hacking has been consistently led by US journalists, for example in the September 2010 investigative blast in the New York Times magazine.

As the Times reported then, “interviews with more than a dozen former reporters and editors at News of the World … described a frantic, sometimes degrading atmosphere in which some reporters openly pursued hacking or other improper tactics to satisfy demanding editors.”

Let’s assume that, at best, hacking was only perpetrated by NOTW journalists. Even that premise would make hundreds of members of the UK press complicit in the hacking, since many either once worked at NOTW and knew about the hacking or had friends who worked there.

In an interview (below) before Brooks resigned, a TV journalist asks a spokesman for Newscorp, NOTW’s owner, whether Rebekah Brooks could honestly lead an investigation into actions that had occurred under her own watch as editor of NOTW. The spokesman shudders and stutters, trying to avoid saying the obvious: you can’t investigate yourself. The same logic must be true for many members (and former members) of the UK press itself.

For example…

Tina Brown, editor in chief at Newsweek and The Daily Beast, quipped after the NYT’s exposé last September that “I’m shocked, shocked to learn … that the voice mail messages of celebrities have been bugged for tidbits of gossip—can you believe it?—by the Murdoch press in London.” At the time, I’d assumed that Brown’s use of Captain Renault’s iconic “I’m shocked” line from Casablanca was just a playful way of saying she’d strongly suspected there was hacking.

In fact, it’s possible that Brown was giving a self-indicting double wink: that, like Renault’s, Brown’s knowledge wasn’t theoretical or speculative, and that within the UK press fraternity the practice was common knowledge. After all, Brown was formerly the editor of UK magazine Tatler and is married to Sir Harold Evans, long-time editor of The Sunday Times, a one-time Murdoch employee known for his investigative prowess. As a former member of the UK press herself, Brown may have intimate knowledge of the phone hacking habits of her peers. Being friends with some of the miscreants, or friends of friends, Brown is doubtless cautious about throwing too many stones herself.

Again, the best-case scenario is that no former NOTW staffers were silly enough to take their phone hacking skills with them when they changed jobs. At worst, journalists at multiple publications were engaged in the hacking, and the UK press’s persistent investigative lethargy is not just the product of professional courtesy to fellow club members, but an attempt to avoid wielding a tar brush that might be turned on itself.

The Times says that the practice was widespread in the UK (saying in the 2003 slide of its NOTW timeline that “Former reporters say that hacking into the voice-mail of story targets was a widespread practice at NOTW and elsewhere”) but doesn’t follow through on the implications of what amounts to a giant conspiracy of silence in the UK.

For now, the UK press is focused on chasing the scandal ever higher inside Newscorp. It’s obviously exciting to ask whether billionaire James Murdoch will be arrested soon too. It’s not just exciting, though. It’s useful. Keeping the spotlight headed upwards helps the UK press avoid asking hard questions of itself.

So it is up to the US press (and UK bloggers?) to ask: who at The Financial Times was doing phone hacking? The Independent? The Daily Mail? The Guardian? The Sunday Times? The Telegraph?

(Update: Guardian editor Alan Rusbridger offers a stunning chronicle of his own paper’s attempt to report on NOTW’s misdeeds and the overall ‘omerta‘ in the UK press.)

Connecting the dots shouldn’t be too hard. As social network expert Valdis Krebs notes, network analysis might be one good tool for journalists to use in unraveling this story. By tracking which NOTW reporters moved on to other UK publications, you might find patterns that would trace the infectious spread of hacking practices.

Tools like Influence Networks might help. Or our tool, Twiangulate.
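The technique Krebs describes can be sketched in a few lines. In this illustration the careers are invented placeholders, but the method is real: each reporter links every pair of publications on their CV, so titles sharing many NOTW alumni surface as heavy edges.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical career histories, standing in for real CV data a
# journalist would compile from bylines and LinkedIn-style records.
careers = {
    "reporter_a": ["NOTW", "Daily Mail"],
    "reporter_b": ["NOTW", "The Sun", "Daily Mirror"],
    "reporter_c": ["NOTW", "Daily Mail"],
    "reporter_d": ["The Guardian"],
}

# Every pair of publications on one CV gets an edge; shared alumni add weight.
edge_weights = defaultdict(int)
for stops in careers.values():
    for a, b in combinations(sorted(set(stops)), 2):
        edge_weights[(a, b)] += 1

# Publications sharing the most NOTW alumni are the likeliest vectors
# for hacking practices to have spread.
for (a, b), w in sorted(edge_weights.items(), key=lambda kv: -kv[1]):
    print(f"{a} <-> {b}: {w} shared reporter(s)")
```

With real staffing data, the heavy edges out of NOTW would be exactly the "patterns tracing the infectious spread" suggested above.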

So. Onward into the muck.

(Update: Here is Reuters’ take on NOTW’s ethical vacuum:

“New staff would be given the cold shoulder until they’d proved themselves to be “thoroughly disreputable” so their colleagues could trust them. It was no place for anyone to pipe up and say: ‘This doesn’t seem ethical to me.’ That would have made you a laughing stock.”

Journalists didn’t explicitly ask for private investigators to get involved in their work, but help would be provided if a reporter got stuck on a promising story. “How it arrived on your desk was a bit of a mystery. You didn’t know and you didn’t ask,” said the reporter. “Every week, somebody’s mobile phone records, somebody’s landline records, sometimes even somebody’s medical records. It was common enough not to be notable.”

It looks like London mayor Boris Johnson agrees with me:

London’s mayor, Boris Johnson, a former journalist, has a typically contrarian view: “I think we’re going through one of these periodic firestorms of hypocrisy,” he told NEWSWEEK. “I’ve got no doubt that a good number of papers were engaged in identical practices to those of News of the World. The confected outrage about the intrusions that you’re reading in some newspapers that I won’t mention by name, except to say that they’re the Daily Mail—I’d be amazed if these papers weren’t engaged in similar practices. Including the Daily Mirror and maybe others as well.”)


Classified advertising is on the move

Pressflex researched and coauthored the first-ever cross-market survey of classified advertising migration to the Internet, published by the World Association of Newspapers.

The pace of the migration from print to Internet is relentless, but manageable. By confidently and imaginatively managing the market, publishers can retain their profits and customer relations as they and their advertising move toward the digital age.

This report offers some simple rules to help newspapers retain their grip on their classified markets.

For more information and to obtain a copy of the report, click here.