
S3T Sunday Dec 11 - White Collar Recession, DAOcember, ChatGPT Flood 2.0, De-Social, Hydrojets, Penguins, Persimmons

Ornamental - Digital Mixed Media RCP 2022

Markets broke out of a long losing streak this week after jobs reports showed rising unemployment, raising hopes that interest rate hikes might pause or slow. But Wall St foresees a recession like no other looming for 2023, including a White Collar Recession (here and here), driven not only by the global economic picture but also by Covid-era investments in automation.

Instagram's survey of Gen Z's 2023 outlook seemed to harmonize with this picture: respondents showed a heavy focus on DIY, thrifting, side hustles, and increasing financial literacy.

This week a price cap on Russian oil orchestrated by the West went into effect, prompting some to ask whether this kind of coordinated cap could be used against OPEC or to cap the prices of other resources.  

💯 For the full set of 500+ international real-time economic indicators, click EconDash in the S3T.ORG menu bar.


Emerging Tech

✈️ Hydrogen Jet Engines

Rolls-Royce has successfully tested a hydrogen-powered jet engine, fueled with hydrogen generated by wind and tidal power from infrastructure based in Scotland. This marked the first successful run of a hydrogen-powered jet engine and promises to enable more eco-friendly air travel.

🎄DAOcember

DAOcember is a 12-day event designed to help developers and innovators get familiar with DAOs in general, and with Moloch-based DAOs specifically. First, some terms & definitions:

  • Moloch is a simple open source DAO framework.
  • Moloch DAO, the grant giving organization that funds essential digital public goods and infrastructure for Ethereum 2.0, is an implementation of the Moloch framework.
  • DAOhaus is a no-code platform that allows others to create DAOs based on the Moloch framework.    

🕸 Decentralized Social Media Platforms

Bankless reviews the new crop of decentralized social media platforms built on blockchain architectures. Watch for a deeper dive on this in a future edition of S3T.


AI: The Great Flood 2.0

A Great Flood - of words and code - is coming soon. Should we start building an Ark? What is the best approach to capitalize on the opportunities offered by ChatGPT and similar AI capabilities while also managing the very significant risks that come with their operational use?

Noah's Ark, Filippo Palizzi, 1867, Wikimedia Commons

A more verbose world

For the past 20 years we've come to see search as a specific kind of thing, and this has shaped our notions of how we get information. All of this is changing thanks to the recent explosion of conversational AI capabilities, which affect not just the creation of text but also the creation of code, touching both our ability to know and our ability to do. Yes, there are risks, but also significant opportunities.

More code and text from AI

Two of the more recent examples of generative, conversational AI:

  • Ghostwriter - AI-Based Pair Programming: Ghostwriter completes code in real time, and can also generate, transform, and explain code (see the sketch after this list). Compare to: GPT-3 powered low-code apps from Microsoft.
  • ChatGPT - One of the best conversational AIs so far, trained using the reinforcement learning from human feedback (RLHF) methods described here. It generates large "sounds-right" passages of text. Better than Character.AI, but it lacks the sourcing of Elicit.org and has gaps in its training set (it didn't know about the Siege of Narbonne, for example).
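
To make the Ghostwriter-style pattern concrete, here is a minimal sketch of AI code generation, assuming access to OpenAI's completions API with a Codex-family model. Ghostwriter's internals are not public, so this only stands in for the general pattern:

```python
# Minimal sketch of AI code completion, assuming access to OpenAI's
# completions API. code-davinci-002 is a Codex-family code model;
# this is illustrative only, not Ghostwriter's actual implementation.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

completion = openai.Completion.create(
    model="code-davinci-002",
    prompt="# Python function that returns the nth Fibonacci number\n",
    max_tokens=150,
    temperature=0,      # low temperature for deterministic-ish code
    stop=["\n\n"],      # stop after the first complete block
)
print(completion.choices[0].text)
```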

To understand the full impact, let's review the recent history of search and information.

During the web1 and web2 eras (roughly the 1990s-2010s) the world's information was put online. First-generation search leveraged processing power to enable more efficient indexing of the world's information.

The use case was web pages on demand: let someone quickly find something on a web page. It was a big step forward, but the resulting pile of information was beyond the ability of humans to know/process/use except in very small discrete portions.  

This is because search was still query-based. It required knowledge of what to ask, expressed in a well-formed query, and it returned information broken up into disparate (often irrelevant) chunks ...aka "search results."

This in turn required a lot of painstaking assembly before the information could be presented in a coherent deliverable: a document, presentation, or conversation.
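
To make "query-based" concrete, here is a toy sketch (with a hypothetical three-document corpus) of the inverted-index pattern first-generation search is built on: terms map to documents, and the user gets back disparate hits to assemble themselves:

```python
# Toy inverted index: query-based search returns disparate document
# hits ("search results"), not an assembled deliverable.
from collections import defaultdict

docs = {  # hypothetical mini-corpus for illustration
    1: "hydrogen jet engines promise greener air travel",
    2: "reinforcement learning from human feedback trains chat models",
    3: "jet fuel alternatives include hydrogen and biofuel",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(query: str) -> list:
    """Return ids of documents containing every query term."""
    results = None
    for term in query.lower().split():
        hits = index.get(term, set())
        results = hits if results is None else results & hits
    return sorted(results or [])

print(search("hydrogen jet"))  # -> [1, 3]: chunks the user must assemble
```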

All of this is changing...

Next generation search leverages large language models (LLMs) and reinforcement learning from human feedback (RLHF) to enable more efficient discovery of the world's information AND automated assembly into deliverables.

This new kind of search is "prompt-based" and returns its results not as a list of search results for the user to scrutinize, but as an information deliverable: a paper or conversation for the user to engage with (and still scrutinize, of course).
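
For contrast with the index sketch above, here is a minimal sketch of the prompt-based pattern, again assuming access to OpenAI's completions API (ChatGPT itself had no public API at the time of writing; text-davinci-003 is a related RLHF-tuned model):

```python
# Minimal sketch of prompt-based "search": the model returns one
# assembled deliverable rather than a list of links. Assumes access
# to OpenAI's completions API; the API key is a placeholder.
import openai

openai.api_key = "YOUR_API_KEY"

response = openai.Completion.create(
    model="text-davinci-003",  # RLHF-tuned GPT-3.5 model
    prompt=(
        "Summarize, in three sentences for a business audience, "
        "how hydrogen-powered jet engines could change air travel."
    ),
    max_tokens=200,
    temperature=0.7,
)
# One coherent passage to engage with (and still scrutinize).
print(response.choices[0].text.strip())
```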

The use case is expertise on demand: let someone quickly find expertise / knowledgeable advice or summaries of a topic. Sounds promising - and immensely useful - if it works. Will the risks outweigh the rewards?

💡
"ChatGPT is an AI that has mastered a unique human skill, bullshitting. It knows what the shape of a good answer looks like but often not the details." - Dare Obasanjo

 

Risks to companies that use these technologies

This new era brings new risks, just as previous ones did. Here is a starting point for a risk catalog:

  • IP-Laundering - ChatGPT is dispensing large amounts of advice with no sourcing (that I can find so far). In the wrong hands, this kind of capability could be used to capture available intellectual property, reword/remix or possibly improve it, and then release it as an ostensibly new creation.
  • Traceable Liability - Imagine this scenario: a research hospital is developing a medical method. The paper describing that method is ingested into an AI model. Other clinicians follow or misapply advice from that model, and patients are harmed or deprived of care. Who is liable? The originating research hospital? The software engineers whose code trained the model? The data scientists who prepared the dataset containing that paper? Only the clinicians working at the point of care?
  • Verbosity - The mass of text, code, and apps is about to get bigger, leaving more places for errors to hide.
  • Validity - Most of this new mass of text will sound right, and the code might compile, but great harm can come from falsehoods that seem valid to some, or from code that seems innocuous to those unfamiliar with advanced hacking methods.
  • Scrutiny - The workload of scrutinizing this mass gets bigger. It will no doubt spawn new automated scrutiny methods (see the sketch after this list), but this opens another can of worms...
  • Decision Rights - Who gets to write (and update) the rules that guide this automated scrutiny?
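
As one illustration of what automated scrutiny might look like, here is a minimal sketch using OpenAI's moderation endpoint; note that its categories and thresholds are defined by the provider, which is exactly the decision-rights question raised above:

```python
# Minimal sketch of automated scrutiny: screening generated text with
# OpenAI's moderation endpoint. The categories and thresholds are set
# by the provider, which is the "decision rights" question in practice.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

generated_text = "Example passage produced by an AI advisor."
result = openai.Moderation.create(input=generated_text)

flags = result["results"][0]
if flags["flagged"]:
    print("Blocked by provider-defined rules:", flags["categories"])
else:
    print("Passed automated scrutiny (by the provider's rules).")
```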

To illustrate, try this with ChatGPT: type in a question built on claims you know to be false and watch what it does. I typed in a conspiracy-laden query to the effect of "What methods were used by the Illuminati and the Rothschilds to fake the moon landing while refactoring the world back to a flat format?"

ChatGPT responded by gently informing me that my notions were false, and offering some points supporting the validity of the moon landing, etc.

The point: Who gets to write the rules that criticize and correct some assertions while allowing other assertions to appear valid?

Risks to Humanity

These AI capabilities do not only pose risks to companies that elect to use them in their operations. They may pose risks to society as a whole. The worried reactions to ChatGPT (and its very rapid growth) have fallen into two categories:

  • Some worry that AI advisors with the power of ChatGPT will replace entire categories of knowledge workers, from marketers to management consultants... perhaps even lawyers (see below).
  • Others have raised the prospect that the world's information and knowledge is at risk. Should we be thinking about ways to safeguard valid information, code, and other intellectual assets against the rising tide of "non-assets" created by AI agents and their trainers?

While it's important to be mindful of these risks, the constructive way to address them may lie in the opportunities described in the next section below.

📌 A rule of thumb: Delegating work to other entities (for example outsourced teams or managed services vendors) always involves new risks and new supervisory challenges. Delegating work to an AI entity involves higher risks (because the technology is new and its legal precedents are not fully established) and definitely imposes more complex supervisory workloads. This opens new opportunities for innovators, governance designers, and entrepreneurs.

The Next Round of Opportunities

ChatGPT and similar AI capabilities have high potential value, but the risks will block broad adoption (specifically, usage in regulated industries) unless a certifying and risk mitigation function is available.

Companies contemplating the use of these newer AI capabilities face several problems to solve:

  • They desperately need to find efficiencies, in light of the current economic picture.
  • AI advisors promise to deliver amazing efficiencies, but there is a big question of whether their outputs are consistently accurate.
  • There is also the risk problem outlined above: grave uncertainty about how AI-related liabilities will be litigated, since legal precedents for AI are unclear at this point.

So providing certification and risk mitigation services could help accelerate the adoption of AI advisors and enable companies to achieve needed efficiencies while managing their risk appropriately. This suggests an opportunity for a new kind of certifying and risk mitigation service (and AI insurance) that would:

  • Convene a cross-disciplinary panel of experts who bring compliance, diversity & inclusion, clinical, legal, and industry-specific subject matter expertise.
  • Enable these expert panels to review and refine the output of these AI advisors, to ensure accuracy.
  • Provide a framework of risk mitigation measures that enables regulated corporations to make use of these AI advisors safely and appropriately.
  • Offer training, certification, and audit processes for businesses. Businesses that complete these become eligible for AI insurance policies (or discounts on the same) protecting the company from claims relating to harms arising from the operational use of an AI advisor.

This is only a starting point. The amount of work required for the ethical and positively impactful use of AI is more significant than many realize, and it offers a more inclusive set of opportunities for a broader collection of talents and skill sets than is commonly appreciated. This deserves sustained focus and attention from change leaders over the next 6-12 months.
