Discussion about this post

Ged:

First of all, I think it's kind of a moot point to worry about the question of who's responsible enough to build AGI, for very similar reasons to the ones you lay out for why we won't have to worry about AI welfare and/or consciousness anytime soon.

IF there is any reasonable way to speak about AGI, then it isn't really a big deal to begin with... I don't know if you've ever read this article, https://www.aisnakeoil.com/p/agi-is-not-a-milestone, but it's the first one so far that convinced me to treat AGI as anything other than a mere advertising term. (I guess that would just move the goalposts, make us worry about ASI instead, and shift the entire debate to that term? Really, to anything that would be able to usher in the singularity, no?)

Secondly, given the current state of affairs in the US: the reliance of the US economy on datacenters, the influence of the tech community on the current president, the state of the stock market minus the Magnificent Seven, and the entirely corrupt status quo, I also don't think we need to waste many thoughts on THIS being the straw that breaks the camel's back. In the highly unlikely case that the courts did give a firm ruling that would harm the industry as a whole, I absolutely suspect the entire matter would be declared a national security issue and overruled on those grounds, no?

I do see a potential bubble burst sometime soon, but I absolutely do not expect it to come from the direction of the courts.

That being said, I love the expansion on AI welfare; it reminds me of the entire debate on which lives can and cannot be grieved (re: Butler), but I think I have to meditate a little further on that. Thanks in any case.

