
The Department of War has stated they will only contract with AI companies who accede to “any lawful use” and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Regardless, these threats do not change our position: we cannot in good conscience accede to their request.

It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider. Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place. Should the Department choose to offboard Anthropic, we will work to enable a smooth transition to another provider, avoiding any disruption to ongoing military planning, operations, or other critical missions. Our models will be available on the expansive terms we have proposed for as long as required.

  • pkjqpg1h@lemmy.zip · 20 hours ago

    Did we read the same thing?

    We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

    So they accept surveillance in other countries? What about other countries’ democratic values?

    Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

    So you don’t because it still sucks? But if it didn’t, you would?

    And what about legal?

    • Do Not Develop or Design Weapons???
    • Do Not Compromise Privacy or Identity Rights???

    I’ve really lost my faith in the US. They think they hold the power, but they’re missing the point: real power is built on trust, and we’re losing more of it every day.

  • FlashMobOfOne@lemmy.world · 1 day ago

    I read somewhere that Anthropic has $18,000,000,000 in commitments from last year alone, so conceivably, they can stand to lose a mere $200,000,000 and it won’t create a huge issue for them in the short term.

    I hope that’s how they’re looking at it.

    • TheSeveralJourneysOfReemus@lemmy.world · 21 hours ago

      I read somewhere that Anthropic has $18,000,000,000 in commitments from last year alone, so conceivably, they can stand to lose a mere $200,000,000 and it won’t create a huge issue for them in the short term.

      How does one count that amount of anything, let alone money?

      • el_abuelo@programming.dev · 11 hours ago

        Start at 1 and work your way up in increments of 1.

        See you in about 100 years, give or take a few decades.

    • criss_cross@lemmy.world · 20 hours ago

      It’s probably more that they don’t wanna get blamed if AI launches missiles because the idiots in charge pressed shift+tab and yolo’d.

      Claude: “You’re right. I completely committed a war crime. I’m so very sorry! How would you like to proceed?”