Interesting Stuff from the Internet
Not Science Fiction: Harvard Scientists Have Developed an “Intelligent” Liquid (scitechdaily.com)
Harvard researchers at the John A. Paulson School of Engineering and Applied Sciences have developed a groundbreaking programmable metafluid that can adjust its properties, such as viscosity and optical transparency, in response to varying pressures. This new fluid class integrates small, air-filled elastomer spheres that alter the fluid's characteristics under pressure, enabling applications ranging from robotic actuators to dynamic shock absorbers and optical devices that can shift from clear to opaque. This innovative fluid, which can transition between behaving like a Newtonian and a non-Newtonian fluid, holds the potential for a wide array of fields and uses. The research, published in "Nature," highlights the versatility and scalability of this metafluid, offering significant advancements in materials science and mechanical engineering.
Thinking Like Transformers (srush.github.io)
"Thinking Like Transformers" is a conceptual and educational exploration of transformer mechanics, particularly focusing on their computational models rather than just their architectural overview. This exploration is presented through a unique computational framework and a programming language called RASP (and its Python variant RASPy), developed to simulate transformer-like operations through discrete computation.
The webpage discusses how programming with RASPy, which mimics transformer operations, can help users understand transformers more intuitively. It includes a detailed walkthrough of coding examples that implement basic transformer functionalities such as flipping an input sequence, managing inputs, applying feed-forward networks, and effectively using attention mechanisms.
The content is structured to gradually introduce the transformer concept from a coding perspective, starting with basic operations and moving towards more complex functionalities like attention mechanisms and their applications in tasks like summing neighboring values or computing sequence lengths.
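The flavor of RASP's two core primitives can be sketched in a few lines of plain Python. The function names `select` and `aggregate` mirror RASP's vocabulary, but the simplified list-based semantics below are my own illustration, not the actual RASPy API: `select` builds a boolean "attention matrix" from a predicate over key/query positions, and `aggregate` averages the selected values at each position.

```python
def select(keys, queries, predicate):
    # Boolean "attention matrix": entry [q][k] is True when the
    # predicate holds between key position k and query position q.
    return [[predicate(k, q) for k in keys] for q in queries]

def aggregate(matrix, values):
    # For each query position, average the values at the selected key positions.
    out = []
    for row in matrix:
        selected = [v for v, m in zip(values, row) if m]
        out.append(sum(selected) / len(selected) if selected else 0)
    return out

tokens = [3, 1, 4, 1, 5]
n = len(tokens)
indices = list(range(n))

# Flipping a sequence: query position q attends to key position n - 1 - q.
flip_matrix = select(indices, indices, lambda k, q: k == n - 1 - q)
flipped = aggregate(flip_matrix, tokens)
print(flipped)  # [5.0, 1.0, 4.0, 1.0, 3.0]

# Computing sequence length: attend everywhere and aggregate an indicator of
# position 0; uniform averaging yields 1/n at every position.
full = select(indices, indices, lambda k, q: True)
inv_length = aggregate(full, [1 if i == 0 else 0 for i in indices])
print(inv_length)  # [0.2, 0.2, 0.2, 0.2, 0.2]
```

The length trick is a nice example of the post's main point: attention can only average, so "count" operations have to be expressed as reciprocals of uniform averages.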
But what is a GPT? Visual intro to transformers | Chapter 5, Deep Learning (youtube.com)
What was it like when oxygen killed almost all life on Earth? - Big Think
The article from Big Think discusses the Great Oxygenation Event, a crucial period in Earth's history when the rise of oxygen in the atmosphere nearly wiped out all existing life. This event occurred around 2.3 to 2.0 billion years ago, during a time known as the Huronian Glaciation or "Snowball Earth," when the entire surface of the planet was frozen. The increase in atmospheric oxygen was primarily due to the activity of cyanobacteria, a type of photosynthetic microorganism that began producing oxygen about 2.7 billion years ago. As a waste product of photosynthesis, oxygen gradually accumulated in the atmosphere, eventually reaching levels that caused severe environmental changes, including a global freeze. This prolonged ice age lasted about 300 million years, posing extreme survival challenges to the existing life forms. However, some organisms managed to endure these harsh conditions, and when the ice eventually melted, Earth's biosphere emerged vastly different, setting the stage for further evolutionary developments. The article compares this natural phenomenon to modern scenarios where organisms, including humans, might face similar threats due to environmental changes caused by their activities, such as pollution and resource depletion. The historical context serves as a warning and a lesson about the potential self-destructive outcomes of altering one's environment too drastically or quickly.
AI Has Lost Its Magic (msn.com)
The text reflects on the author's initial fascination with generative AI and its subsequent normalization in their daily life. Initially, AI, like ChatGPT, provided a novel and entertaining way to create whimsical and creative content, which the author found more engaging than traditional forms of entertainment, like watching Netflix. This phase was marked by generating unique and imaginative responses, such as poems in the style of Hart Crane about ice cream sandwiches or creating fictional histories for products.
However, the author's use of AI shifted from these creative exploits to more practical and mundane tasks over time. The novelty of AI's capabilities began to wear off as its utilitarian functions took precedence. This included using AI for academic administration, market research, and website creation, which, while useful, did not provide the same level of delight as the earlier creative applications.
This evolution mirrors a common trajectory of technological adoption, where initial excitement and novelty give way to integration into everyday tasks, eventually becoming mundane and taken for granted. The author notes that while the enchantment with AI's creative potential has faded, its practical utility continues to impact various aspects of professional and personal life, reflecting the broader societal integration and reliance on AI technologies.
Snippets from the Newsletters / Newspapers / Books
“When I am asked what I worry about in the market, the answer usually is ‘nothing’, because everyone else in the market seems to spend an inordinate amount of time worrying, and so all of the relevant worries seem to be covered. My worries won’t have any impact except to detract from something much more useful, which is trying to make good long-term investment decisions” - Bill Miller
Two close boyhood friends grow up and go their separate ways. One becomes a humble monk, the other a rich and powerful minister to the king. Years later they meet. As they catch up, the portly minister takes pity on the thin and shabby monk. Seeking to help he says: “You know, if you could learn to cater to the king, you wouldn’t have to live on rice and beans.”
To which the monk replies: “If you could learn to live on rice and beans, you wouldn’t have to cater to the king.” - J L Collins
Jill Lepore
Most of what once existed is gone. Flesh decays, wood rots, walls fall, books burn. Nature takes one toll, malice another. History is the study of what remains, what’s left behind, which can be almost anything, so long as it survives the ravages of time and war: letters, diaries, DNA, gravestones, coins, television broadcasts, paintings, DVDs, viruses, abandoned Facebook pages, the transcripts of congressional hearings, the ruins of buildings. Some of these things are saved by chance or accident, like the one house that, as if by miracle, still stands after a hurricane razes a town. But most of what historians study survives because it was purposely kept—placed in a box and carried up to an attic, shelved in a library, stored in a museum, photographed or recorded, downloaded to a server—carefully preserved and even catalogued. All of it, together, the accidental and the intentional, this archive of the past—remains, relics, a repository of knowledge, the evidence of what came before, this inheritance—is called the historical record, and it is maddeningly uneven, asymmetrical, and unfair. In the history of the world, most of the people who have ever lived either did not know how to write or, if they did, left no writing behind, which is among the reasons why the historical record is so maddeningly unfair.
The psychologist Dean Keith Simonton has found that one notable attribute that distinguishes high performers is volume:
“A small percentage of workers is responsible for the bulk of the work...the top 10% of the most prolific elite can be credited with 50% of all contributions, whereas the bottom 50% of least productive workers can claim only 15%...the most productive contributor is about 100 times more prolific than the least.”
Byrne Hobart:
“You will not learn anything of lasting importance from TV, movies, podcasts…they’re junk food. Successful people converge on 3 ways to learn: lots of reading time, some exercises and projects, and conversations with people who are slightly ahead of them.”
Janan Ganesh:
"People are willing to do almost anything other than read at length...At the same time, no one relishes being ignorant or incurious...One way of squaring these opposing impulses is to give things that aren’t books the intellectual status of books."
Tim Ferriss:
“Almost every idea that you have is downstream from what you consume. When you choose who to follow on Twitter, what book to read, what podcast to listen to, you’re choosing your future thoughts.”
SITALWeek
a) One Shark Jump to Singularity
In his book Scale, theoretical physicist and Santa Fe Institute faculty member Geoffrey West (we are huge fans of West here at NZS Capital!) uses the math of finite time singularities to illustrate the nature and increasing pace of change. According to West: "A finite time singularity simply means that the mathematical solution to the growth equation governing whatever is being considered—the population, the GDP, the number of patents, et cetera—becomes infinitely large at some finite time…This is obviously impossible, and that’s why something has to change." Essentially, as things grow exponentially in a system with open-ended growth like our economy, you reach a point where you need an innovation phase/paradigm shift to keep a system from collapsing. (We discussed this idea that there must be some sort of intervention to redirect the growth trajectory in a bit more detail with a visual (excerpted from Scale) in this paper we wrote a few years ago, and see also West’s discussion in this lecture clip from five years ago). This framework is obviously relevant to describing innovation in the tech sector. For the last six decades, we’ve experienced a steady progression of compute power growing exponentially as governed by Moore’s Law. The chronology of this progression is roughly: mainframes, PCs, servers, dotcom, smartphones, cloud computing, “big data” analytics, and, now, AI (see AI Is the New Dotcom for a deeper discussion). Each one of these overlapping eras represents some sort of platform or phase shift along exponential technology growth curves that stack onto one another. The catch in West’s progress curve model is that you have to move to the next exponential curve before the prior one mathematically collapses. And, with each new curve, we reach the collapse point more quickly than the last, requiring more rapid phase shifts. 
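West's distinction between ordinary exponential growth and a finite time singularity can be made concrete with a toy calculation. The growth law below (dx/dt = x², whose closed-form solution x(t) = x0 / (1 − x0·t) blows up at the finite time t* = 1/x0) is my own minimal illustration of the general idea, not the actual equations from Scale:

```python
import math

def superexp(t, x0=1.0):
    # Closed-form solution of the superexponential growth law dx/dt = x**2
    # with x(0) = x0: diverges as t approaches the finite time t* = 1 / x0.
    return x0 / (1 - x0 * t)

t_star = 1.0  # finite-time singularity for x0 = 1

# Ordinary exponential growth (dx/dt = x) is finite at every t;
# superexponential growth races to infinity before t reaches t_star.
for t in [0.0, 0.5, 0.9, 0.99]:
    print(f"t={t:.2f}  exponential={math.exp(t):8.2f}  "
          f"superexponential={superexp(t):10.2f}")
```

The point of the comparison is West's: an exponential curve never forces a crisis on its own, but a superexponential one mathematically guarantees a blow-up at finite time, so the system must jump to a new curve (an innovation or paradigm shift) before t* arrives.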
It certainly feels like technological changes are accelerating, although it’s hard to distinguish what’s objectively real from the always-on, 24/7 barrage of hyped media we’ve been living with for over two decades. Yet, the phase shift to AI (which is another way of saying that computer chips have finally caught up to the capabilities of the human brain), does feel like it’s happening with unprecedented pace. And, already, reports of OpenAI/Microsoft’s $10B AI data center and rumored $100B AI data center cluster suggest the sort of massive, multi-trillion-dollar investments that will be necessary to usher in the next phase of ultra-advanced AI, perhaps around the end of this decade. We haven’t even fully wrapped our heads around the current AI, and already we need to look to the future – to what is often described as the hypothetical Kurzweilian concept of the Singularity, where AI surpasses us and humans no longer reign supreme in the Milky Way Galaxy.
The stakes have risen with each technology platform shift over the last six decades, just as they’ve risen every century since the Renaissance and the Scientific Revolution. The current pace is exasperating, and part of me wouldn’t mind a little break, so to speak. West’s concept of finite time singularities is mathematical, while Kurzweil’s “singularity” is conceptual and means something entirely different. West referred to Kurzweil’s view of the technological singularity as “untethered” in a recently updated version of his Scale talk. West also quotes John von Neumann, who is considered the father of the technological singularity concept (which, again, is distinct from West’s mathematical concept of open-ended growth collapse), from 1954: “The ever accelerating progress of technology…gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Perhaps not coincidentally, von Neumann made that statement the same year the first silicon transistor was invented. Whereas West and many others might take issue with various theories of the technological singularity, the event of AI surpassing humans might be the only next-in-line solution to the ever-increasing pace of phase transitions that West’s model of finite time singularities calls for. Based on our current trajectory, AI tech will only continue to get bigger and better. Thus, the technological singularity appears to be the inevitable next phase shift in our future (even if it arrives without some of Kurzweil’s farfetched prognostications). However, beyond the fuzzy event horizon of the technological singularity, we’d be relying on AI to keep jumping the shark, because, at that point, human innovation will be unable to keep sufficient pace to create the next big thing. Effectively, in West’s model of progress, humans are close to taking their hands off the wheel.
Each successive jump to avoid the collapse of open-ended growth is far more costly than the last. It’s unclear what type of economy or size of capital investment would be required to keep the pace going beyond the point of computers becoming more capable than humans.
b) AI Bubble Index
Above, I named the various phases of technological phase/platform shifts since the 1960s. Each of these step functions involved a period of bubble investing (to varying degrees) across the hardware/infrastructure and application/services layers. Currently, we are somewhere in the middle of the AI bubble where the long-term promise and market size is something to salivate over, but the near term will bring some degree of correction before we can get back on a steadier path of value creation. It’s of course impossible to make a precise call on when a bubble will pop, and (fortunately) we’re not in that business at NZS Capital. Rather, we strive to assemble a portfolio with a combination of Resilience and optionality focused on long-term growth opportunities. So, while we don’t enjoy the post-pop vertical drops, we do our best to navigate them. There’s a theory that if everyone says it’s a bubble, then it must not be one because, surely, everyone is rational and not everyone can be the greater fool. For example, there was a lot of bubble talk about SaaS and cloud software a little over a decade ago. And, it turned out not to be much of a bubble (with a few isolated exceptions).
There are a few things that I think are needed for a real, large-scale market bubble. Some of them we are starting to see, but some of them haven’t arrived yet. A really big bubble, like the dotcom market rally in the late 90s (I started working professionally in the stock market in 1998), pulls nearly every asset class and sector into it. Every company in 1999 somehow had a dotcom strategy and valuation, no matter how little they had to do with the Internet (aside: it’s rather ironic that some of the biggest dotcom bubble stocks were the ones most threatened by the Internet in the end!). Today, by analogy, if you saw a taco joint declare that it’s an AI company, that might be a red flag. The WSJ reports that Yum Brands’ chief digital officer “has a vision for ‘AI-powered’ fast-food in which artificial intelligence shapes nearly every aspect of how its Taco Bell, Pizza Hut, KFC and Habit Burger Grill restaurants are run.” Or, maybe, if you happen to see a steel manufacturer acquire a supplier of AI data-center equipment, that might be a flag. Well, maybe we're a bit further into this bubble after all. By the end of a market-wide bubble, every company in the market has some narrative tied to the bubble du jour. Another marker is irrational IPO activity, with lines of black SUVs outside of investment firm offices clamoring for the latest money-losing FOMO stock. This carbonated cavalcade has yet to materialize for AI. I’d posit that future bubbles will arrive faster and be shorter than bubbles of the past. I can’t really back that sentiment up beyond instinct and experience, but, perhaps, like West’s finite time singularities, necessity will dictate that bubble cadence increase and duration shorten, allowing the system to keep on keeping on.
Billy Oppenheimer
a) Little Did I Know You Were Just Reading My Tongue
Throughout the early part of his professional tennis career, Andre Agassi could never beat a player named Boris Becker. In particular, Agassi struggled to return Becker’s serve. “His serve was something the game had never seen before,” Agassi explains. After yet another loss to Becker in the semifinals of the 1988 Indian Wells Open, Agassi writes, “I promise myself I won’t lose to him the next time we meet.” He wasn’t entirely sure how he’d make good on that promise, but he began to watch film of Becker, obsessively studying his serve. “And I started to realize,” Agassi said, “he had this weird tick with his tongue. I’m not kidding. He would go into his rocking motion, and just as he was about to toss the ball, he would stick his tongue out. It would either be right in the middle of his lip, or it would be to the left corner of his lip.” If Becker stuck his tongue out over the middle of his lip, he would serve the ball up the middle. If he put it to the side, he would serve the ball to the side. After he learned the way Becker revealed himself in his little tongue tick, Agassi said, “The hardest part wasn’t returning his serve. The hardest part was not letting him know that I knew this. I had to resist the temptation of reading his serve for the majority of the match, and instead, choose the moments when I was going to use that information on a given point to execute a shot that would allow me to break the match open.” Of the next 11 matches between the two, Agassi won 10 of them. After Becker retired in 1999, over a beer, Agassi said to Becker, “By the way, did you know you used to do this with your serve?” Agassi said, “He about fell off the chair. And then he said, ‘I used to go home all the time and tell my wife, it’s like he reads my mind! Little did I know you were just reading my tongue.’”
b) The Sign Of Those That Don’t Experience Their Own Product
In an interview with the restaurateur Will Guidara, the artist and podcast host Debbie Millman makes a brief, passing comment about how, at her local branch of a popular frozen dessert franchise, the plastic spoons are jagged and hard on the mouth. After she then asks one of her prepared questions (which was unrelated to the spoon comment), Guidara says he will answer the question, but before doing so, he wants to say something about what the spoon reveals. “That is the sign of a company with leaders or executives that don’t take the time to experience their own product. Or when they’re tasting their soft serve, they’re doing so in their nice office with a normal spoon, as opposed to experiencing it in the way that the people they’re serving are experiencing it. Because, while those are small details, if you actually experience them, it’s so obvious.”
c) The Ear and Toenail School
The great connoisseur of Italian Renaissance art Bernard Berenson made a fortune on his ability to authenticate paintings. In the late 19th and early 20th centuries, no one could say for certain which paintings were by Michelangelo, Raphael, or Leonardo da Vinci. To figure out who did what, Berenson borrowed a technique from a Swiss anatomy teacher named Giovanni Morelli. Berenson’s biographer says it became known as the “ear and toenail school.” Berenson found that, since a good painter could mimic the broader subjects and strokes of a master, the surest way to classify paintings was to look where the painter thought the lay observer wouldn’t look: true mastery was revealed in the tiny details. Take a Madonna—amateur painters, Berenson realized, tended to focus on perfecting prominent features, such as Mary’s face, and neglect smaller elements like Jesus’ toenails. In assessing paintings, Berenson ignored the obvious focal points that any decent painter could replicate and instead inspected the easy-to-overlook details. It is there that painters reveal their level of craft, technique, and skill.
d) The Big Choices Are Not Very Revealing
A Harvard psychologist once asked Amos Tversky why he became a psychologist. “It’s hard to know how people select a course in life,” he said. “The big choices we make are practically random. The small choices tell us more about who we are. Which field we go into may depend on which high school teacher we happen to meet. Who we marry may depend on who happens to be around at the right time of life. On the other hand, the small decisions are very systematic. That I became a psychologist is probably not very revealing. What kind of psychologist I am may reflect deep traits.”
Cities such as San Francisco and Chicago are trying to save their downtown office districts from spiraling into a doom loop. St. Louis is already trapped in one. As offices sit empty, shops and restaurants close and abandoned buildings become voids that suck the life out of the streets around them. Locals often find boarded-up buildings depressing and empty sidewalks scary. So even fewer people commute downtown. This self-reinforcing cycle accelerated in recent years as the pandemic emptied offices. St. Louis’s central business district had the steepest drop in foot traffic of 66 major North American cities between the start of the pandemic and last summer, according to the University of Toronto’s School of Cities. Traffic has improved some in the past 12 months, but at a slower rate than many Midwestern cities.