
  • They (or the LLM that summarized their findings and may have hallucinated part of the post) say:

    It is a fascinating example of “Glue Code” engineering, but it debunks the idea that the LLM is natively “understanding” or manipulating files. It’s just pushing buttons on a very complex, very human-made machine.

    Literally nothing that they show here is bad software engineering. It sounds like they expected that the LLM’s internals would be 100% token-driven inference-oriented programming, or perhaps a mix of that and vibe code, and they are disappointed that it’s merely a standard Silicon Valley cloudy product.

    My analysis is that Bobby and Vicky should get raises; they aren’t paid enough for this bullshit.

    By the way, the post probably isn’t faked. Google-internal go/ URLs do leak out sometimes, usually in comments. Searching GitHub for that specific URL turns up one hit in a repository which claims to hold a partial dump of the OpenAI agents. Here is combined_apply_patch_cli.py. The agent includes a copy of ImageMagick; truly, ImageMagick is our ecosystem’s cockroach.
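
    For readers unfamiliar with the pattern, here is a minimal sketch of the “glue code” shape being described: the model emits a patch as plain text, and a thoroughly ordinary CLI does the actual filesystem work. Everything below is invented for illustration; this is not the leaked combined_apply_patch_cli.py, and git apply stands in for whatever patch machinery the real agent uses.

    #!/usr/bin/env python3
    """Hypothetical sketch of an apply-patch shim (not the leaked file)."""
    import subprocess
    import sys
    import tempfile

    def apply_patch(patch_text: str, repo_dir: str = ".") -> None:
        """Write the model's patch to a temp file and hand it to git apply."""
        with tempfile.NamedTemporaryFile("w", suffix=".patch",
                                         delete=False) as f:
            f.write(patch_text)
            path = f.name
        # The "intelligence" ends here; decades-old tooling takes over.
        subprocess.run(["git", "apply", path], cwd=repo_dir, check=True)

    if __name__ == "__main__":
        # Usage: some-llm-agent | python apply_patch_cli.py
        apply_patch(sys.stdin.read())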


  • Now I’m curious about whether Disney funded Glaze & Nightshade. Quoting Nightshade’s FAQ, their lab has arranged to receive donations which are washed through the University of Chicago:

    If you or your organization may be interested in pitching in to support and advance our work, you can donate directly to Glaze via the Physical Sciences Division webpage, click on “Make a gift to PSD” and choose “GLAZE” as your area of support (managed by the University of Chicago Physical Sciences Division).

    Previously, on Awful, I noted the issues with Nightshade and the curious fact that Disney is the only example stakeholder named in the original Nightshade paper, as well as the fact that Nightshade’s authors wonder about the possibility of applying Glaze-style techniques to feature-length films.


  • The author also proposes a framework for analyzing claims about generative AI. I don’t know if I endorse it fully, but I agree that each of the four talking points represents a massive failure of understanding. Their LIES model is:

    • Lethality: the bots will kill us all
    • Inevitability: the bots are unstoppable and will definitely be created in the future
    • Exceptionalism: the bots are wholly unlike any past technology and we are unprepared to understand them
    • Superintelligence: the bots are better than people at thinking

    I would add a P, for Plausibility or Personhood or Personality: the incorrect claim that the bots are people. Maybe call it PILES.


  • Fundamentally, Chapman’s essay is about how subcultures transition from valuing functionality to valuing aesthetics. Subcultures start with form following function by necessity. However, people adopt the subculture because they like the surface appearance of those forms, so the subculture eventually hollows out into a system which follows the iron law of bureaucracy and becomes non-functional through over-investment in the façade and the tearing down of Chesterton’s fences. Chapman’s not the only person to notice this pattern; other instances of it run the spectrum from right to left.

    I think that seeing this pattern is fine, but obsessing over it makes one into Scott Alexander, paranoid about societal manipulation and constantly fretting over in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern’s fundamentally about memes, not humans.

    So, on Chapman. I think that he’s a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can’t confirm or cite that, and I don’t think that we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:

    [T]he central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints. That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.

    He’s explicitly not allied with our good friends, but at the same time he moves in the same intellectual circles. I’m familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander’s rejection of neoreaction (source); that’s a somewhat-incoherent position which suggests that he’s politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):

    Rationalisms are ideologies that claim that there is some way of thinking that is the correct one, and you should always use it. Some rationalisms specifically identify which method is right and why. Others merely suppose there must be a single correct way to think, but admit we don’t know quite what it is; or they extol a vague principle like “the scientific method.” Rationalism is not the same thing as rationality, which refers to a nebulous collection of more-or-less formal ways of thinking and acting that work well for particular purposes in particular sorts of contexts.

    I don’t know. Sometimes he takes Yudkowsky seriously in order to critique him. (source, source) But the critiques are always very polite, no sneering. Maybe he’s really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn’t take enough LSD. I was once on LSD at the office for a full working day; I saw the entire structure of the corporation, fully understood its purpose, and, unlike Chapman apparently, came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.

    Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I’ve gotta do five, so a fifth possibility is that he’s not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong then it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.


  • Linear no-threshold isn’t under attack, but under review. The game-theoretic conclusions haven’t changed: limit overall exposure, radiation is harmful, more radiation means more harm. The practical consequences of tweaking the model concern e.g. evacuation zones in case of emergency; excess deaths from radiation exposure are balanced against deaths caused by evacuation, so the choice of model determines the exact shape of evacuation zones. (I suspect that you know this but it’s worth clarifying for folks who aren’t doing literature reviews.)
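
    To make that concrete, here is a toy sketch; every coefficient and dose below is an invented placeholder rather than a real radiological figure, and the point is only that swapping the dose-response model moves the break-even line for evacuation, not the conclusion that radiation is harmful.

    def excess_deaths_lnt(dose_sv, population, risk_per_sv=0.05):
        # Linear no-threshold: risk scales with dose all the way down to zero.
        return population * risk_per_sv * dose_sv

    def excess_deaths_threshold(dose_sv, population, risk_per_sv=0.05,
                                threshold_sv=0.1):
        # Threshold model: doses below the cutoff are treated as harmless.
        return population * risk_per_sv * max(0.0, dose_sv - threshold_sv)

    def should_evacuate(dose_sv, population, evacuation_deaths, model):
        # Evacuate only if staying costs more projected lives than leaving.
        return model(dose_sv, population) > evacuation_deaths

    # Same low-dose zone, same cost of evacuating, opposite decisions:
    print(should_evacuate(0.05, 10_000, 20, excess_deaths_lnt))        # True
    print(should_evacuate(0.05, 10_000, 20, excess_deaths_threshold))  # False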


  • I don’t have any experience writing physics simulators myself…

    I think that this is your best path forward. Go simulate some rigid-body physics. Simulate genetics with genetic algorithms. Simulate chemistry with Petri nets. Simulate quantum computing. Simulate randomness with random-number generators. You’ll learn a lot about the limitations that arise at each step as we idealize the real world into equations that are simple enough to compute. Fundamentally, you’re proposing that Boltzmann brains are plausible, and the standard physics retort (quoting Carroll 2017, Why Boltzmann brains are bad) is that they “are cognitively unstable: they cannot simultaneously be true and justifiably believed.”
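
    As a minimal sketch of one such exercise, here is a toy genetic algorithm; every rate, size, and the fitness function itself are arbitrary choices, and the instructive part is how many idealizations you must commit to before anything runs at all: fixed-length genomes, a scalar fitness, synchronous generations.

    import random

    TARGET = [1] * 32  # toy goal: evolve an all-ones bitstring

    def fitness(genome):
        # Count the positions that match the target.
        return sum(g == t for g, t in zip(genome, TARGET))

    def mutate(genome, rate=0.02):
        # Flip each bit independently with a small probability.
        return [1 - g if random.random() < rate else g for g in genome]

    def crossover(a, b):
        # Single-point crossover at a random cut.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        parents = population[:10]  # truncation selection, the crudest kind
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(50)]

    print(generation, fitness(max(population, key=fitness)))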

    A lesser path would be to keep going with consciousness and neuroscience. In that case, go read Hofstadter 2007, I Am a Strange Loop, to understand what it could possibly mean for a pattern to be substrate-independent.

    If they’re complex enough, and executed sufficiently quickly that I can converse with it in my lifetime, let me be the judge of whether I think it’s intelligent.

    No, you’re likely to suffer the ELIZA Effect. Previously, on Awful, I’ve explained what’s going on in terms of memes. If you want to read a sci-fi story instead, I’d recommend Watts’ Blindsight. You are overrating the phenomenon of intelligence.


  • I’m going to be a little indirect and poetic here.

    In Turing’s view, if a computer were to pass the Turing Test, the calculations it carried out in doing so would still constitute thought even if carried out by a clerk on a sheet of paper with no knowledge of how a teletype machine would translate them into text, or even by a distributed mass of clerks working in isolation from each other so that nothing resembling a thinking entity even exists.

    Yes. In Smullyan’s view, the acoustic patterns in the air would still constitute birdsong even if whistled by a human with no beak, or even by a vibrating electromagnetically-driven membrane which is located far from the data that it is playing back, so that nothing resembling a bird even exists. Or, in Aristoteles’ view, the syntactic relationship between sentences would still constitute syllogism even if attributed to a long-dead philosopher, or even verified by a distributed mass of mechanical provers so that no single prover ever localizes the entirety of the modus ponens. In all cases, the pattern is the representation; the arrangement which generates the pattern is merely a substrate.

    Consider the notion that thought is a biological process. It’s true that, if all of the atoms and cells comprising the organism can be mathematically modeled, a Turing Machine would then be able to simulate them. But it doesn’t follow from this that the Turing Machine would then generate thought. Consider the analogy of digestion. Sure, a Turing Machine could model every single molecule of a steak and calculate the precise ways in which it would move through and be broken down by a human digestive system. But all this could ever accomplish would be running a simulation of eating the steak. If you put an actual ribeye in front of a computer there is no amount of computational power that would allow the computer to actually eat and digest it.

    Putting an actual ribeye in front of a human, there is no amount of computational power that would allow the human to actually eat and digest it, either. The act of eating can’t be provoked merely by thought; there must be some sort of mechanical linkage between thoughts and the relevant parts of the body. Turing & Champernowne wrote a chess-playing program (Turochamp) and also were known (apocryphally, apparently) to play “run-around-the-house chess” or “Turing chess”, which involved standing up and jogging a lap between moves. The ability to play Turing chess is cognitively embodied, but the ability to play chess is merely the ability to represent and manipulate certain patterns.

    At the end of the day what defines art is the existence of intention behind it — the fact that some consciousness experienced thoughts that it subsequently tried to communicate. Without that there’s simply lines on paper, splotches of color, and noise. At the risk of tautology, meaning exists because people mean things.

    Art is about the expression of memes within a medium; it is cultural propagation. Memes are not thoughts, though; the fact that some consciousness experienced and communicated memes is not a product of thought but a product of memetic evolution. The only other thing that art can carry is what carries it: the patterns which emerge from the encoding of the memes upon the medium.


  • He very much wants you to know that he knows that the Zizians are trans-coded and that he’s okay with that, he’s cool, he welcomes trans folks into Rationalism, he’s totally an ally, etc. How does he phrase that, exactly?

    That cult began among, and recruited from, a vulnerable subclass of a class of people who had earlier found tolerance and shelter in what calls itself the ‘rationalist’ community. I am not explicitly naming that class of people because the vast supermajority of them have not joined murder cults, and what other people do should not be their problem.

    I mean, yes in the abstract, but would it really be so hard to say that MIRI supports trans rights? What other people do, when those other people form a majority of a hateful society, is very much a problem for the trans community! So much for status signaling.


  • This is a list of apostates. The idea is not to actually detail the folks who do the most damage to the cult’s reputation, but to attack the few folks who were once members and left because they were no longer interested in being part of a cult. These attacks are usually motivated by emotions as much as a desire to maintain control over the rest of the cult; in all cases, the sentiment is that the apostate dared to defy leadership. Usually, attacks on apostates are backed up by some sort of enforcement mechanism, from calls for stochastic terrorism to accusations of criminality; here, there’s not actually a call to do anything external, possibly because Habryka realizes that the optics are bad but more likely because Habryka doesn’t really have much power beyond those places where he’s already an administrator. (That said, I would encourage everybody to become aware of, say, CoS’s Fair Game policy or Noisy Investigation policy to get an idea of what kinds of attacks could occur.)

    There are several prominent names that aren’t here. I’d guess that Habryka hasn’t been meditating over this list for a long time; it’s just the first few people that came to mind when he wrote this note. This is somewhat reassuring, as it suggests that he doesn’t fully understand how cultural critiques of LW affect the perception of LW more broadly; he doesn’t realize how many people e.g. Breadtube reaches. Also, he doesn’t understand that folks like SBF and Yarvin do immense reputational damage to rationalist-adjacent projects, although he seems to understand that the main issue with Zizians is not that they are Cringe but that they have been accused of multiple violent felonies.

    Not many sneers to choose from, but I think one commenter gets it right:

    In other groups with which I’m familiar, you would kick out people you think are actually a danger, or who might do something that brings your group into disrepute. But otherwise, I think it’s a sign of being a cult if you kick people out for not going along with the group dogma.


  • Thanks! You’re getting better with your insults; that’s a big step up from your trite classics like “sweet summer child”. As long as you’re here and not reading, let’s now read from my third link:

    As a former musician, I know that there is no way to train a modern musician, or any other modern artist, without heavy amounts of copyright infringement. Copying pages at the library, copying CDs for practice, taking photos of sculptures and paintings, examining architectural blueprints of real buildings. The system simultaneously expects us to be well-cultured, and to not own our culture. I suggest that, of those two, the former is important and the latter is yet another attempt to coerce and control people via subversion of the public domain.

    Maybe you’re a little busy with your Biblical work-or-starve mindset, but I encourage you to think about why we even have copyright if it must be flouted in order to become a skilled artist. It’s worth knowing that we musicians don’t expect to make a living from our craft; we expect to work a day job too.