Hi, this is Wayne again with a topic: “Slack may be training its AI off of your messages | TechCrunch Minute”.
Okay, I’ll admit it: I don’t always read the terms and conditions. But last week, Slack showed us why maybe we should be a little more diligent. In a post on the Hacker News forum, someone who actually read the privacy policy rang the alarm bell on one suspicious section, which basically says that Slack can use your data to train its AI. Yes, the Salesforce-owned platform is learning from how often you use the Party Parrot emote. But what’s especially sinister about this is that you’re opted in by default. So if you don’t want Slack lurking around your channels, you need to email a specific address to opt out. That means if you’re just hearing about this now, you could be feeding the AI without even knowing it. There’s a lot of confusion around what Slack’s privacy policy actually means and what Slack is actually doing, but that’s part of the problem. Why can’t we just have a clear-cut answer to what they’re doing with our data? Slack told TechCrunch, quote, “We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce some part of customer data.” And on Threads, one machine learning engineer at Slack, Aaron Maurer, said, “We don’t train or fine-tune LLMs on your messages,” if that’s what you’re worried about. And yeah, that is, in fact, what I’m worried about. But this privacy policy literally says that Slack’s systems develop AI and ML models by analyzing customer data, which includes messages, content, and files.
So even if Slack isn’t currently using our messages to train its models, it very well could. Maurer did concede that this privacy policy was written before Slack launched its Slack AI product. But you can’t do that: you’re not a seed-stage startup making boneheaded mistakes; you’re part of a company with a $277 billion market cap. Hopefully Slack gets its act together and clears this all up, but this whole ordeal points to why consumers are so squeamish about AI in the first place. Pretty much all of us have already had our data used to train AIs without our consent. Some of the most popular LLMs are trained on datasets that are basically just the entire internet, and listen, I’ve posted a lot of things on the internet. That’s why we see visual artists especially angry about AI: their work is being used to train this technology, and then that technology, in turn, is threatening their livelihoods. Hopefully, companies can learn from Slack’s mistakes and see that user privacy shouldn’t be an afterthought. People just want to know how their data is being used.
Is that too much to ask?