Microsoft has updated its terms of service, set to go into effect at the end of September, clarifying that its Copilot AI services should not be used as a replacement for advice from actual humans.

AI-based agents are popping up across industries as chatbots are increasingly used for customer service calls, health and wellness applications, and even doling out legal advice. However, Microsoft is once again reminding its customers that its chatbots' responses should not be taken as gospel. “AI services are not designed, intended, or to be used as substitutes for professional advice,” the updated Service Agreement reads.

The company specifically pointed to its health bots as an example. The bots “are not designed or intended as substitutes for professional medical advice or for use in the diagnosis, cure, mitigation, prevention, or treatment of disease or other conditions,” the new terms explain. “Microsoft is not responsible for any decision you make based on information you receive from health bots.”

The revised Service Agreement also details additional AI practices that are now explicitly prohibited. Users, for example, cannot use its AI services to extract data. “Unless explicitly permitted, you may not use web scraping, web harvesting, or web data extraction methods to extract data from the AI services,” the agreement reads. The company also bans reverse-engineering attempts to reveal the models’ weights, as well as using its data “to create, train, or improve (directly or indirectly) any other AI service.”

“You may not use the AI services to discover any underlying components of the models, algorithms, and systems,” the new terms read. “For example, you may not try to determine and remove the weights of models or extract any parts of the AI services from your device.”

Microsoft has long been vocal about the potential dangers of generative AI’s misuse. With these new terms of service, Microsoft looks to be staking out legal cover for itself as its AI products gain ubiquity.
