1) Some research from about a year ago on the personality traits that predict interest in EA. The finding isn’t very surprising, but the whole thing is worth a read: We found two psychologically distinct moral factors that predict whether someone is a proto-EA. First, such a person is particularly willing to give away personal resources (time or money) to help others in need, even if they are far away. Second, such a person is focused on effectiveness when allocating their altruistic resources. We found that both expansive altruism and effectiveness-focus significantly predicted positive attitudes and interest towards effective altruism in non-EA samples. And we found that both scales predict stronger identification with effective altruism in pre-existing effective altruists. In short, both the E and the A are required to make someone a proto-EA, but only a few people score highly on both.
24: I don’t know anything about “Internet safety,” but I am an “””AI professional,””” i.e., familiar with the technical details of cutting-edge AI models, and I share your bearishness on AI safety research. I’m not sure “Internet safety” is a good analogy, though; they’re very different phenomena at the end of the day. Curious to see what others say.
25: I have done a fair amount of online learning, but I have found that, at least for programming (is that what you’re asking about?), it’s much more productive to just suffer through a project and try to figure it out. You learn so much that way, and if you started with a course, you would end up suffering through your first project anyway. For instance, when I was getting started with Python I downloaded a bunch of weather data from NOAA, made visualizations of weather in the US, and tried to quantitatively characterize different sub-climates. It was really hard and I was googling constantly, but I learned more doing that than I did in any course.
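A project like that can start very small. Here's a minimal, hypothetical sketch of the first step — parsing a daily-temperature CSV and computing monthly averages with just the standard library. The column names and the inline sample data are stand-ins; real NOAA exports have many more columns and quirks:

```python
# Hypothetical first pass at a weather-data project (illustrative only;
# real NOAA files are larger and messier than this inline sample).
import csv
import io
from collections import defaultdict

# Stand-in for a downloaded CSV: date plus daily max temp in tenths of a
# degree C (the convention NOAA uses for temperature fields).
sample = """DATE,TMAX
2020-01-01,50
2020-01-02,61
2020-02-01,80
2020-02-02,72
"""

# Group daily readings by month, converting tenths of deg C to deg C.
monthly = defaultdict(list)
for row in csv.DictReader(io.StringIO(sample)):
    month = row["DATE"][:7]  # "YYYY-MM"
    monthly[month].append(int(row["TMAX"]) / 10)

# Average each month's readings.
averages = {m: sum(temps) / len(temps) for m, temps in monthly.items()}
print(averages)
```

From there, pointing the reader at a real downloaded file and swapping `print` for a matplotlib plot is exactly the kind of "suffer through it and google constantly" step the comment describes.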
I (and some of my friends) really enjoyed the Kaggle course for Python — it was a great intro to programming and CS. Also, it's free. (https://www.kaggle.com/)
24: "If there weren’t, was it possible to do anything useful if someone had thought about doing this?"
I'm no expert, just brainstorming:
I think there are a couple of things to consider here:
1. Were the "safety problems" with the internet possible to predict?
2. Would it have been possible to act on any predicted problems, given the decentralized nature of the internet?
We'd have to start by defining some of the problems. Off the top of my head:
a) polarization driven by social media bubbles
b) scammers and viruses
c) sexual predators
d) availability of age-inappropriate materials
e) an ad-driven attention economy providing perverse incentives for companies and their algorithms
That'll do to start with.
b, c and d seem easily predictable but aren't actionable pre-internet (at least, not in a way that solves the problems any better than reactive solutions do).
a and e seem more like the kind of thing it would have been useful to predict, because they are things we stumbled into and it is now very difficult or impossible to change course. They also seem possible to predict if someone had thought about it enough. They are somewhat analogous to AI safety problems in that any solutions require a coordinated response at the user, company, or government level. I'm skeptical that those solutions would have been implemented, though, because you'd need people en masse to be aware of the potential problems and then to put pressure on companies or governments that outweighed the pressure going the other way.
Still, I would have liked to know someone was thinking about it early. They might have been able to generate enough pushback, early enough, that we could have recognized the problems starting to form and mounted a more cohesive movement to counter them.
So my guess is that AI Safety is analogous to Internet Safety in this way, but I don't think pre-internet Internet Safety would necessarily have been doomed.
Although misaligned AI is probably the most concerning threat facing humanity, I'm a bit less pessimistic about it than others. I think your "internet safety" analogy can, at least on some level, provide useful insights. I was born before the internet (if you consider its start to be when ARPANET adopted TCP/IP) and spent a good chunk of my life without it, but we still had to address many of the same issues, and there were many analogous technologies that helped us prepare for what the internet would be like. For example, we used modems to connect to bulletin boards, and the experience wasn't that much different from the early internet. Even though we don't know exactly what the AI systems of tomorrow will be like, the systems of today will probably provide useful insights.
Also, thank you for linking to my blog!
Hey man, thanks for linking to my blog! I appreciate the shout-out, and I'm a fan of your writing as well 🙂
Some real slammers in that playlist of yours - so many that I’m definitely going to have to listen to the ones I don’t know. I guess you’re a fan of Numero Records?
100:1 odds that someone else could make an anti-Sweden post. E.g., their economic downturn in 2020 was greater than that of Norway, Finland, or Denmark, with a lot more excess deaths.
I liked Learning How to Learn.
Something Substack-wise has changed, making it more difficult to like and comment here. Outside of that, thanks for sharing your current interests. I often relate AI to Pandora's box. As for the video of internet anger: pointless sloganeering and blaming of politicians is a near constant in all chambers, and self-directed research and opinion often seems to get met with a subhuman approach. Politics seems ideologically placed above the human, when on review it is nothing more than a set of tools on the floor. Anyhow.