Whenever AI is mentioned, lots of people in the Linux space immediately react negatively. Creators like TheLinuxExperiment on YouTube always feel the need to add a disclaimer that “some people think AI is problematic”, or something along those lines, whenever an AI topic is discussed. I get that AI has many problems, but at the same time its potential is immense, especially as an assistant on personal computers (just look at what “Apple Intelligence” seems to be capable of). GNOME and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete. Using an AI-less desktop may be akin to hand-copying books after the printing-press revolution. If you can think of specific problems, it is better to point them out and try to think of solutions, not reject the technology as a whole.
TLDR: A lot of Luddite sentiment around AI in the Linux community.
yeah i see that too. it seems like mostly a reactionary viewpoint. the reaction is understandable to a point since a lot of the “AI” features are half baked and forced on the user. to that point i don’t think GNOME etc should be scrambling to add copies of these features.
what i would love to see is more engagement around additional pieces of software that are supplemental. for example, i would love if i could install a daemon that indexes my notes and allows me to do semantic search. or something similar with my images.
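A supplemental notes-indexing daemon like that could be sketched roughly as below. Everything here is illustrative: a real tool would use an embedding model for true semantic search, whereas this minimal stand-in ranks notes by bag-of-words cosine similarity.

```python
import math
import re
from collections import Counter

def vectorize(text):
    """Bag-of-words term counts; a real daemon would use embedding vectors instead."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(notes, query, top_k=3):
    """Rank indexed notes by similarity to the query and return the best matches."""
    qv = vectorize(query)
    ranked = sorted(notes, key=lambda n: cosine(vectorize(n), qv), reverse=True)
    return ranked[:top_k]

notes = [
    "grocery list: milk, eggs, bread",
    "meeting notes: discuss quarterly budget with finance",
    "recipe for sourdough bread starter",
]
print(search(notes, "bread recipe", top_k=1))
```

A daemon version would simply re-run `vectorize` whenever a note changes and keep the index on disk; the ranking logic stays the same.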
the problems with AI features aren’t within the tech itself but in the surrounding politics. it’s become commonplace for “responsible” AI companies like OpenAI to not even produce papers around their tech (product announcement blogs that are vaguely scientific don’t count), much less source code, weights, and details on training data. and even when Meta releases their weights, they don’t specify their datasets. the rat race to see who can make a decent product with this amazing tech has made the whole industry a bunch of pearl clutching FOMO based tweakers. that sparks a comparison to blockchain, which is fair from the perspective of someone who hasn’t studied the tech or simply hasn’t seen a product that is relevant to them. but even those people will look at something fantastical like ChatGPT as if it’s pedestrian or unimpressive because when i asked it to write an implementation of the HTTP spec in the style of Fetty Wap it didn’t run perfectly the first time.
The Linux community has never been of one mind on anything. We have always been against, and for, everything.
Some distro or project will integrate AI, or not, and it will be forked. And then forked again.
Many AI models are run on Linux. Linux won’t be left behind in any real sense. Linux won’t lose market share over this.
Linux developers paid by AI firms will integrate it into products. Those that volunteer will make their own decisions.
I think conceptually AI is very useful and interesting as a general technology. But when we start talking about OpenAI and others, their methods for data collection, respect for licenses, etc., are where I (and I believe others) take issue.
I agree. OpenAI has sold out everything it supposedly stood for.
AI may be useful in some cases (ask Mozilla) but it is not like what you said in the middle part of your post. Seeing the vote rate makes me feel a tiny bit better about this situation.
Testing AI (a knowledge system) was my first job out of college in the ’90s (I used to be a programmer). I’m not against it, but I don’t like it under my feet either. I like using the operating system all by myself, and generating things on my own. Especially now that I’m an artist, I like painting on paper. I even dislike digital art (I find it flat), let alone generative art.
That’s easy: move over to Windows or Mac and enjoy. I’ll stay on my dumb-ass Linux distros, thank you.
The AI in my head is a bit underpowered but it gets the job done
Same as mine. But mine also gets confused regularly, and it gets worse with every new version (age) 🤣🤣
I think most of the hostility is in regard to the shilling of certain sites and services. Local, self-hosted AI is not likely to get as much flak, I feel. Another aspect of the hate is people generating images and calling it art, which… it is, but it’s the microwave equivalent of art. Such negative sentiments can be remedied by actually doing artistic shit with whatever image they generate, like, idk, putting the image into Photoshop and editing it in a way that actually improves it, or using said image as a canvas to be added onto, or some other shit.
I’d call it realistic, not concerning.
Fair enough…
Ok. Tell me how AI has made your life better so far.
Using “AI” has been beneficial, for example, to generate image descriptions automatically, which were then used as alternative text on a website. This increased accessibility AND users were able to use full-text search on these descriptions to find images faster. The same goes for things like classification of images, video, and audio. I know of some applications in agriculture where object detection and classification are used to optimize the usage of fertilizer and pesticides, reducing costs and the environmental impact they cause. There are of course many more examples like these, but the point should be clear.
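As a rough illustration of that alt-text workflow, here is a minimal sketch. `caption_image` is a hypothetical stand-in (a hard-coded lookup) for a real captioning model; the point is only that generated descriptions become ordinary searchable text.

```python
def caption_image(path):
    """Stand-in for a real image-captioning model; a hard-coded
    lookup purely for illustration."""
    fake_model_output = {
        "photos/cat.jpg": "a grey cat sleeping on a windowsill",
        "photos/market.jpg": "a vegetable stall at an outdoor market",
    }
    return fake_model_output[path]

def build_alt_text_index(paths):
    """Generate alt text for each image and keep it for later search."""
    return {p: caption_image(p) for p in paths}

def find_images(index, term):
    """Plain full-text search over the generated descriptions."""
    return [p for p, alt in index.items() if term.lower() in alt.lower()]

index = build_alt_text_index(["photos/cat.jpg", "photos/market.jpg"])
print(find_images(index, "cat"))
```

On a real site the descriptions would go into the images’ `alt` attributes, so the same generated text serves screen readers and search at once.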
I think we should be chasing all the trendy trends to become competitive with the competition. That’s the only way to push those numbers up (that need to be pushed up). That’s how a winner wins.
But does Linux have to “win”? And if so, what does it “win”?
The prize of the competition is what the competitors compete for. There’s a prize and the winner gets it; the loser doesn’t get it.
Why is this so hard to understand? I guess it’s nature’s way of weeding out the losers.
So what’s the prize the Linux desktop would get? For a for-profit corporation, that’s market share and revenue. Yet, as far as I’m concerned, most Linux desktops don’t chase market share, nor earn revenue.
It’s to out-compete the competitors so as not to become obsolete. … also I hope you’re aware that I’m saying all of this ‘ironically’, to poke fun at the mental gymnastics in the OP’s post.
Oh. I get it now.
I’m not against AI. I’m against the rampant privacy-disrespecting data collection, the fact that everybody is irresponsibly rushing to slap AI into everything even when it doesn’t make sense because line go up, and the fact that nobody is taking the limitations of things like large language models seriously.
The current AI craze is like the NFTs craze in a lot of ways, but more useful and not going to just disappear. In a year or three the crazed C-level idiots chasing the next magic dragon will settle down, the technology will settle into the places where it’s actually useful, and investors will stop throwing all the cash at any mention of AI with zero skepticism.
It’s not Luddite to be skeptical of the hot new craze. It’s prudent as long as you don’t let yourself slip into regressive thinking.
…this looks like it was written by a supervisor who has no idea what AI actually is, but desperately wants it shoehorned into the next project because it’s the latest buzzword.
Guys we need AI on our blockchain web3.0 iot. Just imagine the synergy
Here we have a straight-shooter with upper management written all over him
Edit: actually, read zerakith’s comment instead.
Gnome and other desktops need to start working on integrating FOSS AI models so that we don’t become obsolete.
I don’t get it. How would Linux become obsolete if it doesn’t have native AI toolsets in its DEs? It’s not like the Linux desktop has an 80% market share. People who run Linux desktops as daily drivers are still a niche, and most people don’t even know Linux exists. They grew up with Microsoft and Apple shoving ads down their throats, and that’s all they know. If I need AI, I will find ways to integrate it into my workflow, not have it forced on me because a dev thinks I need it.
And if you really need something like MS’s Recall, there is a FOSS version of it.
It’s a good point, but you can always have even less market share.
A FLOSS project’s success is not necessarily marked by its market share but often by the absolute benefit it gives to its users. A project with one happy user and developer can be a success.
I won’t rehash the arguments around “AI” that others are best placed to make.
My main issue is that “AI” as a term is basically a marketing one, used to convince people that these tools do something they don’t, and it’s causing real harm. It’s redirecting resources and attention onto a very narrow subset of tools, replacing other less intensive ones. These tools have significant impacts (during an existential crisis around our use and consumption of energy). There are some really good targeted uses of machine-learning techniques, but they are being drowned out by a hype train that is determined to make the general public think that we have, or are near, Data from Star Trek.
Additionally, as others have said, the current state of “AI” has a very anti-FOSS ethos, with big firms using and misusing their monopolies to steal, borrow, and co-opt data that isn’t theirs to build something that contains that data but is their copyright. Some of this data is intensely personal and sensitive, and the original intent behind sharing it was not to train a model which may, in certain circumstances, spit that data out verbatim.
Lastly, since you use the term Luddite, it’s worth actually engaging with what that movement was about. Whilst it’s pitched now as a generic anti-technology backlash, in fact it was a movement of people who saw what the priorities and choices in the new technology meant for them: the people who didn’t own the technology and would get worse living and working conditions as a result. As it turned out, they were almost exactly correct in their predictions. They are indeed worth thinking about as an allegory for the moment we find ourselves in. How do ordinary people want this technology to change our lives? Who do we want to control it? Given its implications for our climate needs, can we afford to use it now, and if so, for what purposes?
Personally, I can’t wait for the hype train to pop (or maybe depart?) so we can get back to rational discussions about the best uses of machine learning (and computing in general) for the betterment of all rather than the enrichment of a few.
Right, another aspect of the Luddite movement is that they lost. They failed to stop the spread of industrialization and machinery in factories.
Screaming at a train moving 200kmph hoping it will stop.
So, lick the boot instead of resisting you say?
Work on useful alternatives to big corpo crapware = lick the boot?
Mkay…
It was more in response to your comments. I don’t think anyone has a problem with useful FOSS alternatives per se.
You misunderstand the Luddite movement. They weren’t anti-technology, they were anti-capitalist exploitation.
The 1810s: The Luddites act against destitution
It is fashionable to stigmatise the Luddites as mindless blockers of progress. But they were motivated by an innate sense of self-preservation, rather than a fear of change. The prospect of poverty and hunger spurred them on. Their aim was to make an employer (or set of employers) come to terms in a situation where unions were illegal.
They probably wouldn’t be such a laughing stock if they were successful.
All we have are words or violence.
I’ve never heard anyone explicitly say this, but I’m sure a lot of people (i.e. management) think that AI is a replacement for static code. If you have a component with constantly changing requirements, that can make sense, but don’t ask an LLM to perform a process that’s done every single day in the exact same way. Chief among my AI concerns is the amount of energy it uses. It feels like we could mostly wean off of carbon-emitting fuels in 50 years, but if energy demand skyrockets, we’ll be pushing those dates back by decades.
My concern with AI is also its energy usage. There’s a reason OpenAI has tons of datacenters, yet people think it doesn’t take much because it’s “free”!
It’s a surprisingly good comparison especially when you look at the reactions: frame breaking vs data poisoning.
The problem isn’t progress; the problem is that some of us disagree with the idea that what’s being touted is actual progress. The things LLMs are actually good at they’ve been doing for years (language translation); the rest of it is so inexact it can’t be trusted.
I can’t trust any LLM-generated code because it lies about what it’s doing, so I need to verify everything it generates anyway, in which case it’s easier to write it myself. I keep trying it, and it looks impressive until it ends up as a way worse version of something I could have already written.
I assume that it’s the same way with everything I’m not an expert in. In which case it’s worse than useless to me, I can’t trust anything it says.
The only thing I can use it for is to tell me things I already know and that basically makes it a toy or a game.
That’s not even getting into the security implications of giving shitty software access to all your sensitive data etc.
If you are so keen on correctness, please don’t say “LLMs are lying”. Lying is a conscious act of deception. LLMs are not capable of that. That’s exactly the problem: they don’t think, they just assemble words with probability. If they could lie, they could also produce real answers.
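That distinction can be made concrete with a toy model. This vastly simplified bigram sketch (illustrative only; real LLMs use neural networks over far larger contexts) picks the next word purely from co-occurrence counts; there is no belief anywhere for it to be honest or dishonest about.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: this is all the "knowledge" the model has.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word):
    """Pick the statistically most likely continuation; no notion of truth involved."""
    return bigrams[word].most_common(1)[0][0]

print(next_word("the"))  # whichever word most often followed "the" in the corpus
```

Whether the continuation happens to be true or false is invisible to the model; it only ever sees frequencies.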