

Yeah, I thought it might be a different kind of AI, at least, until it fucking said “LLM”.
They don’t assess risk; they correlate words. Even if they can be massaged into using a tool to assess risk more accurately, they don’t evaluate risk assessments and decide how those should affect strategy or tactics; they correlate words. They don’t even do the math that puts a value on human life to determine whether an action is worth the cost; they just correlate fucking words. It’s all based on the given training data, so anything they can genuinely offer is already out there, and everything else is suspect because it’s purely based on correlations of words.
It’s like reading the Art of War and thinking that means you’re ready to be a general.
But one thing AI might do is introduce uncertainty that gets used to excuse a nuclear strike a human already wanted to carry out.
Any orbit resulting from a collision will pass back through that collision point unless another collision changes its velocity again. So the higher a collision flings an object, the more time the resulting “orbit” spends dipping back down into the atmosphere to pick up drag, and it might even hit the ground before drag matters.
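If it helps make that concrete, here’s a quick two-body sketch (the 400 km collision altitude and kick sizes are just illustrative assumptions, standard conic-section formulas): however hard a fragment gets kicked, the low point of its new orbit never rises above the collision point, because the orbit still has to pass through it.

```python
import math
import random

MU = 3.986e14        # Earth's gravitational parameter GM, m^3/s^2
R0 = 6_771_000.0     # collision point radius: ~400 km altitude, m (illustrative)

def periapsis_after_kick(vx, vy):
    """Lowest radius of the orbit of a fragment leaving the collision point
    with in-plane velocity (vx radial, vy tangential), in m/s.
    h*h / (MU * (1 + e)) gives periapsis for any conic, bound or not."""
    energy = (vx * vx + vy * vy) / 2 - MU / R0                  # specific orbital energy
    h = abs(R0 * vy)                                            # specific angular momentum
    e = math.sqrt(max(0.0, 1 + 2 * energy * h * h / MU ** 2))   # eccentricity
    return h * h / (MU * (1 + e))                               # periapsis radius

v_circ = math.sqrt(MU / R0)   # circular speed at R0, about 7.67 km/s
random.seed(1)
for _ in range(1000):
    # random collision kick: up to ~3 km/s change in each in-plane direction
    vx = random.uniform(-3000.0, 3000.0)
    vy = v_circ + random.uniform(-3000.0, 3000.0)
    # however big the kick, the new orbit's low point never rises above
    # the collision radius -- the orbit still passes through that point
    assert periapsis_after_kick(vx, vy) <= R0 + 1.0   # 1 m numerical slack
```

So “kicked higher” only raises the far side of the orbit; the near side stays pinned at or below the collision altitude, which is exactly where the atmosphere (or the ground) is waiting.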