Sam Altman's Warning: Why You Shouldn't Use ChatGPT for Critical Matters!

In an era where generative AI like ChatGPT plays a growing role in daily life, a warning straight from its creator, Sam Altman, CEO of OpenAI, is worth listening to. In a podcast conversation with Theo Von, Altman gave two crucial reasons to think carefully before relying on ChatGPT for everything.

Not Suitable for "High-Stakes" Situations
Altman stated explicitly that he would not trust ChatGPT to make decisions on "high-stakes" matters, those carrying significant risk, such as medical diagnoses. As he put it, "I really do not want to trust my medical fate to ChatGPT with no human doctor in the loop." No matter how capable AI becomes, it still has limitations and cannot replace human judgment in critical situations.

Data Privacy Might Not Be 100% Secure
Altman's second point concerns personal data. OpenAI retains user conversations, whether general discussions or sensitive personal stories. Crucially, he noted that "there are no legal protections forcing the company not to disclose that information." Unlike a conversation with a doctor or lawyer, your chats carry no legal privilege: if a court order is issued, OpenAI could be compelled to reveal what you've discussed with ChatGPT. This is a major concern for anyone looking to use AI for highly private consultations.

So, What Should We Do?
Altman's advice is to consider using Large Language Models (LLMs) that run locally on your own computer, rather than relying on cloud-based AI chatbots, the kind you reach by opening an AI provider's website. With a local model, your conversations never leave your machine, so this approach is recommended if data privacy is a priority.

Conclusion
Sam Altman's warning doesn't mean ChatGPT is bad or shouldn't be used at all. Rather, it is a reminder to stay aware of its limitations and of the risks that can arise, especially when dealing with personal data and high-stakes decisions. Understanding these constraints helps us use AI technology more wisely and securely.