Why do we say Please and Thank You to an A.I. chatbot? It doesn’t have an inner life or emotions. Who cares? We can be mean to it. But there is a part of me that wants to be nicer. I’ve been reading too much about A.I. recently.
Reading about the Data Wall and Post-scarcity and SynthLLM and the cyber defense ecosystem and agent integration into enterprise resource management. Oof. That’s too much. But hey, it makes the time go by faster here at work, since the helpdesk tickets are slow around here.
Ever since I saw the A.I. video of Will Smith eating spaghetti, I thought to myself, this A.I. hoopla isn’t going to take over the world anytime soon, no matter how much everyone thinks it will. But then agents came along. And my boss asked me to do some research into Microsoft 365 Copilot. Not to be confused with just the Copilot chatbot. 365 Copilot is an agentic shitstorm you can integrate into your business workflow: “Add agents to Copilot that automate common tasks or work on your behalf.” So I looked into it.
No fucking way are we doing that, not on my watch. When he asks me about it, I tell him that users can create their own agents. I say that because I know his mantra is “people [users] are stupid.” I say it in the hopes it will dissuade him from even considering giving our users that kind of power. Sure, we can give agents only the access they need (role-based, least privilege, etc.) or lock down user-created agents if we don’t like them. But still, this agentic stuff doesn’t sit well with me. Especially now that Anthropic and OpenAI have cyber-security-focused models. They’ll only give access to a small number of “defenders” at first. But someone will come up with an open-source version, and then we’re all fucked.
Anyways, I’m going on lunch break to FaceTime with my wife. Which is more fun than chatting with an LLM and doesn’t cost any tokens.