Jeffrey Ladish
Sep 18, 2020 · 19 tweets
I often get the question, "why does it matter where COVID-19 came from?"

I can see two main reasons. One is that we need this knowledge to prevent future pandemics. If SARS-CoV-2 jumped from a bat to an intermediate host to a human, then we want to know everything we can about that process
Spillovers don't happen randomly. Humans and specific kinds of animals, usually mammals or birds, come into contact in ways that are just right for a virus to jump from one species to another.
It's common for viruses to jump from animals to humans temporarily, but usually when they do, they aren't good at transmitting from human to human. We need to understand the circumstances under which viruses successfully evolve human-to-human transmission.
In the case of SARS-CoV-2, there are a lot of questions. Were miners / farmers / others regularly exposed to relatives of the virus in Yunnan? How much of the population before 2020 tested positive for antibodies to SARS-like coronaviruses?
What kinds of animals might SARS-CoV-2 be able to infect? Cats and mink can be infected & probably transmit among themselves; are there other animals for which this is true?
How likely is it that SARS-CoV-2 was spreading person to person before Wuhan? How would we be able to detect that, now and in the future? Should countries make agreements with each other to share data about emerging pandemics?
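A concrete illustration of the serosurvey questions above: raw antibody-test positivity overstates true prevalence whenever test specificity is imperfect, and the standard Rogan-Gladen correction backs out the true rate. A minimal sketch in Python (the sensitivity, specificity, and positivity numbers are made-up placeholders, not real assay data):

    # Rogan-Gladen correction: apparent positivity p = pi*Se + (1 - pi)*(1 - Sp),
    # so the true prevalence is pi = (p + Sp - 1) / (Se + Sp - 1).
    def true_prevalence(p, sensitivity, specificity):
        pi = (p + specificity - 1) / (sensitivity + specificity - 1)
        return min(1.0, max(0.0, pi))  # clamp: sampling noise can push the estimate outside [0, 1]

    # Hypothetical example: 2% raw positivity on a test with 90% sensitivity
    # and 99% specificity implies only ~1.1% true prevalence -- roughly half
    # of the raw signal is false positives.
    print(true_prevalence(0.02, 0.90, 0.99))

When true prevalence is near the test's false-positive rate, the corrected estimate swings sharply with the assumed specificity, which is one reason retrospective "was it circulating earlier?" serosurveys are so hard to interpret.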
How likely is it that SARS-CoV-2 was directly engineered? Or passaged through lab animals or tissue culture? How would we know? Are there, or could there be, safety procedures in place that would prevent gain-of-function research from being done without an audit trail?
How can China, the US, and the rest of the world cooperate on the investigation of pandemic origins given their respective political incentives? Or if not now, how could systems be set up to incentivize them / enable them to cooperate in the future?
I've observed in the nuclear security world that political leaders often want to de-escalate and avoid the risk of a war that would be extremely costly to both sides, but feel limited in their ability to cooperate by their political incentives, specifically their base wanting them to look tough
This brings me to the second reason I think figuring out the origins of the COVID-19 pandemic is important. We, as a global civilization, need to figure out a *process* for investigating big catastrophes, both potential and actual, and a method for preventing them
Basically, we lack robust methods of collaboration on issues important to every country. We need systems put in place during "peacetime" that will be robust to tenser political environments during and after a crisis.
The work to understand how COVID-19 happened is practice for future work where the stakes may be even higher. For example, large-scale geoengineering projects would require a much greater capacity to reach consensus on difficult scientific questions
As the population grows and technology improves, we can do a lot more. Our power level is increasing, but we cannot anticipate the effects of our power increasing...
For example, if you stepped into a time machine and went to 2018 with the genome of SARS-CoV-2... no one would be able to predict what the virus would do if it were to spread among the human population without actually releasing it
You might try to use animal models to study its transmissibility, and separately use different animal models to study the disease severity, but both would be difficult and the answers would not be precise. This is true regardless of whether the virus was natural or engineered.
Human populations are essentially ecosystems for pathogens, and we lack the modeling ability to understand much at all about how a new species will fare in an ecosystem. This is dangerous when paired with organisms that might have unusually good fitness.
This is both metaphorically and literally true. As a metaphor, the new species is a new technology and the ecosystems are human societies. Introduce algorithmically mediated social media and you can't predict what will happen, neither its rate of growth nor its first- and second-order effects
And it's literally true of new organisms we might create, whether this is a novel virus or a new kind of broccoli. This isn't to say we have *no* predictive power when it comes to the effects of something very new. A new broccoli is unlikely to damage its environment.
The more powerful / fit a new species / technology is, the harder it will be to determine the effects of that species / technology.
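To make the modeling point concrete, here is a minimal SIR (susceptible-infected-recovered) sketch in Python with made-up parameters. Transmission-rate estimates that differ by amounts well within the error bars of any animal-model study produce qualitatively different pandemics:

    # Basic SIR model with a 1-day Euler step; returns the fraction ever infected.
    def sir_final_size(r0, gamma=0.1, days=3650):
        s, i, r = 0.999, 0.001, 0.0  # population fractions
        beta = r0 * gamma            # transmission rate implied by R0
        for _ in range(days):
            new_inf = beta * s * i
            new_rec = gamma * i
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        return r

    # Modest differences in R0 flip the outcome from fizzle to catastrophe:
    # roughly 1%, 58%, and 89% of the population ever infected.
    for r0 in (0.9, 1.5, 2.5):
        print(r0, round(sir_final_size(r0), 3))

And this is the simplest possible model, with homogeneous mixing and fixed parameters; real human "ecosystems" add heterogeneity, behavior change, and viral evolution, each of which degrades predictability further.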

More from @JeffLadish

Jun 19, 2023
I really appreciate that @RishiSunak is explicitly acknowledging the existential and catastrophic risks posed by AI. To have a competent global response, we have to start here

Also, accelerating AI development ⏩ is probably the single most dangerous thing you can do in the world
We're at a pivotal point in time where we have just begun to make AI systems that actually learn and reason, in more and more general ways

This is the beginning of the transition from human cognitive power to AI cognitive power. We have to figure out how to survive this 🔀
The challenge is not that the UK will fall behind. The challenge is not that we won't be able to figure out how to actually build really powerful AI systems that can help solve our other global challenges

The challenge is surviving the creation of systems smarter than us
Jun 15, 2023
People often think AI systems will become kinder or more moral as they get smarter. Indeed as language models have become more capable, they have become nicer and better behaved

Unfortunately, there are strong reasons to think that this niceness is shallow, not deep
The key question when thinking about future AI systems is whether good behavior is driven by some underlying aligned goal set or whether it's driven by proxy goals that do not generalize, e.g. "get humans to think I'm good and helpful"
Ultimately, the only way to make "safe" superintelligent systems is to make systems that deeply care about the same things humans deeply care about. If they only care about appearing helpful or getting a good test score, they will not be safe
May 22, 2023
OpenAI just wrote up their plans for how they would like to develop superintelligent AI, and why they think we can't stop development right now.

I'd summarize their approach as "let's proceed to superintelligence with global oversight"

openai.com/blog/governanc…
First off, it's absolutely wild this is where we're at. The leading AI company in the world is publicly saying they want to build superintelligence in the near future.

Let that sink in
I really do appreciate their openness in this. We could be living in a world where they plan to do this in secret and I much prefer the present world. This is absolutely an urgent conversation we need to have as a global community
May 21, 2023
The more compute we build, the more fuel for an intelligence explosion. I think this is fairly straightforward. Once humans could make industrial amounts of food, it didn't take long to expand to billions. With AI, the expansion will be much, much faster
With humans, there weren't huge amounts of food just lying around ready to be eaten. But there was a huge amount of land that could be quickly converted to farmland at scale. And humans quickly converted it, greatly increasing food supply and ultimately the human population
Once AI systems develop agency and convergent instrumental drives, they will find ways to utilize all the world's compute resources. This could be as simple as many copies running inference. Or it could be one huge training run or (more likely) something more efficient
May 10, 2023
AI proliferation makes us all less safe. Seems like a good thing to prevent, and also a pretty difficult challenge!

I would not be that surprised if a state actor managed to get ahold of OpenAI's frontier models in the next year or two
I really hope this doesn't happen (to OpenAI or any other lab). I've worked hard to try to prevent this from happening.

But also I think it's reasonably likely to happen without substantial effort. I'd give 50/50 odds that a frontier model will be stolen in the next two years
May 10, 2023
There is an idea that it's especially valuable to go slow when strong AGI is very close because this is when you'll get the best empirical feedback on your alignment research

The AI systems might be smart enough to be quite useful to study but not so smart as to take control
I think this is basically correct, and we're currently at that point where we should hit the brakes. Here's why:

1) We're close enough that there's a real chance we could stumble upon strong AGI at any time
2) We're close enough to do lots of useful empirical alignment work
The main reason I think #1 is that there's a big agency overhang and a big compute overhang. If these systems were a little smarter and a lot more agentic, I expect we'd quickly lose control.

Scaling laws haven't broken, and we don't know what GPT-5 will be able to do