Anthropic is more worried about AI than about actual people. And by losing its copyright lawsuit this December, it could undermine the entire race to build AGI.
First of all, I think it's kind of a moot point to worry about who's responsible enough to build AGI, for reasons very similar to the ones you lay out for why we won't have to worry about AI welfare and/or consciousness anytime soon.
IF there is any reasonable way to speak about AGI, then it isn't really a big deal to begin with... I don't know if you've ever read this article (https://www.aisnakeoil.com/p/agi-is-not-a-milestone), but it's the first one so far that convinced me to treat AGI as something other than a mere advertising term. (I guess that would just move the goalposts and shift the entire debate to ASI, and really to anything that could usher in the singularity, no?)
Secondly, given the current state of affairs in the US (the economy's reliance on datacenters, the tech community's influence on the current president, the state of the stock market minus the Magnificent Seven, and the entirely corrupt status quo), I also don't think we need to waste many thoughts on THIS being the straw that breaks the camel's back. In the highly unlikely case that the courts do give a firm ruling that would harm the industry as a whole, I absolutely suspect that the entire matter would be declared a national security matter and overruled on those grounds, no?
I do see a potential bubble burst sometime soon, but I absolutely do not expect it to come from the direction of the courts.
That said, I love the expansion on AI welfare; it reminds me of the entire debate about which lives can and cannot be grieved (re: Butler), but I think I have to meditate a little further on that. Thanks in any case.
"I absolutely suspect that the entire matter would be declared a national security matter and overruled on those grounds." This seems plausible to me. Trump has made clear that US companies need to stay ahead of Chinese ones, so it's possible that his administration might intervene. This did occur to me while writing yesterday, but I chose not to include it to keep the article readable. That said, excellent thought -- thanks so much for sharing it! Perhaps I should have said something about it after all. Really appreciate it!
If you look at Karp's vision of aligning Washington and the Valley more closely, and the general direction this is all taking... I can't speak for Switzerland, where I'm currently residing, but in Germany at least you also have a massive investment wave incoming with the Made in Germany initiative. Two tendencies are converging right now: an increasing reliance upon AI (less on its actual capabilities than on the capital it is swallowing) and, simultaneously, an increasingly clear picture of enterprises that will not become profitable anytime soon. We just had Trump grabbing a 10% stake in Intel and striking this weird 15% NVIDIA deal. We are also heading toward a cliff with the neoliberal order, and it's fairly obvious we are going straight down the authoritarian route. Historically, fascism first made its way in as corporatism, and isn't this eerily similar? We are building infrastructure here that basically needs to be decoupled from valuation proper, and that seems like a venue where the state can step in. So, yeah, if you ever want to expand on that part, I'm keen to read it. I'm already looking into this myself, but so far it all still seems to be in the "tendencies and note-taking" territory. Still, I feel like this is the route we're marching down.
This is great. One other thought I had on the BS concern about AI models feeling pain: it amounts to a statement that corporate property is worthy of the same moral value as actual human beings. Just another volley in the tech-right's effort to destroy belief in fundamental human equality and the sanctity of human life.