Polarized cults of love and hate of AI – how to use it to do good and not bad

My TLDR: Every tool can be used for both good and bad, not just one or the other. Let's learn some of the good use cases, and how to minimize the bad ones, instead of rejecting AI by focusing only on the bad and refusing the idea it could even possibly be good. It in fact can, and here are many ways to make it good, avoid it being bad, and let it keep improving. Yeah, you're gonna have to read it; there's a lot, and I can't compress my whole text into one lazy line. But let's see if the following single-paragraph TLDR, the only AI-written paragraph in this entire post, does it better than me:

AI TLDR: AI isn’t good or evil — it’s a tool that reflects how we use it. The hate it gets comes more from societal misuse than the tech itself: job loss without UBI, plagiarism, or lazy reliance. Yet it can teach, translate, code... when used responsibly. We should fix its flaws, regulate abuse, and use it collaboratively instead of worshiping or demonizing it — every major invention went through that same growing pain.

I prefer centrist views that find balance between sides; things are never black and white, 100% good or 100% evil. But people increasingly hold exactly such opinions of AI, and I find both extremes cringe. The haters definitely far outnumber the lovers, though, so I typically end up having to defend the better side of AI.

Whenever I say anything positive about AI anywhere, the haters can't resist replying that there's no such thing as a decent AI: that it's 100% evil, only usable for plagiarizing, making people dumber, and replacing jobs, while using ludicrous amounts of energy to do it. That it can't ever be good for anything, has already peaked with nothing left to improve, and is fundamentally bad through and through. Or is it? Well, it can be both, and both the devs and we users can steer it somewhere better than the current shit show.

Well, if it is capable of replacing a human, then it's fucking useful, and any hurt from that comes from a bad societal setup. Why wouldn't we want our jobs automated, if we could get the fruits of that machine labor for free? The problem is just that we are not getting them for free: the billionaires are keeping it all, and we are not taxing the machines and paying a UBI to the replaced people so they could afford those fruits.

The energy needs can be optimized too: there are developments in analog chips that are well suited to fast, low-power AI operations, and techniques like routing easier prompts to weaker, cheaper models.
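The routing idea above can be sketched in a few lines. This is a minimal toy illustration, not a real router: production systems typically use a learned classifier, and the difficulty heuristic, threshold, and model names here are all made up for demonstration.

```python
# Toy prompt router: send "easy" prompts to a cheap model and "hard" ones
# to an expensive model, saving energy on the easy majority of traffic.

def estimate_difficulty(prompt: str) -> float:
    """Crude stand-in heuristic: longer prompts and reasoning keywords score higher."""
    keywords = ("prove", "derive", "debug", "step by step", "analyze")
    score = min(len(prompt) / 500, 1.0)
    score += sum(0.2 for kw in keywords if kw in prompt.lower())
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.4) -> str:
    """Pick a (hypothetical) model tier based on estimated difficulty."""
    return "large-model" if estimate_difficulty(prompt) >= threshold else "small-model"

assert route("What's the capital of France?") == "small-model"
assert route("Prove step by step that the sum of two even numbers is even.") == "large-model"
```

Even this crude cut would keep short factual lookups off the big, power-hungry model; a learned router just replaces the heuristic with something smarter.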

Hallucinations can be reduced by penalizing the AI for wrong answers, so it stops guessing when it doesn't know, just for a small chance of being intuitively right. Since training hasn't been doing that, models adopted this bad guessing strategy: a wrong answer wasn't scored any worse than "I don't know", so guessing could only help.
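The incentive above can be made concrete with a tiny expected-score calculation (the reward and penalty values are illustrative assumptions, not any real training setup):

```python
def expected_score(p_correct: float, reward: float = 1.0, penalty: float = 0.0) -> float:
    """Expected score of answering when the model is right with probability p_correct.
    Saying "I don't know" is assumed to score 0."""
    return p_correct * reward - (1 - p_correct) * penalty

# With no penalty for wrong answers, any nonzero chance of being right
# beats abstaining (score 0), so the model always guesses.
assert expected_score(0.1, penalty=0.0) > 0

# With a penalty, guessing only pays off above a confidence threshold:
# p*reward - (1-p)*penalty > 0  =>  p > penalty / (reward + penalty)
assert expected_score(0.1, penalty=1.0) < 0   # low confidence: abstaining is better
assert expected_score(0.9, penalty=1.0) > 0   # high confidence: answering is better
```

So with `penalty=1.0` the model is only incentivized to answer when it's more than 50% confident; with no penalty, bluffing is always the rational strategy.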

AI was built to be able to pass college-level exams, so getting pretty good answers to college-level questions is a pretty damn decent use case, and it has the potential to make you smarter. Having it explain hard-to-grasp concepts like a professor who really knows how to speak understandably is educational.

Translating is also something it's typically better at than other kinds of machine translation, especially between vastly different languages, like East Asian languages versus English, where conventional translations often don't even make sense.

If you're seeing people getting dumber and plagiarizing, those are just the BAD use cases, and they don't warrant denying the existence of the good ones.

Coding is also a great use case, and one that is often criticized because of people misusing it. I find AI to be a valuable tool for coders, as long as they use it correctly and don't blindly copy-paste code they don't understand. If you copy-paste AI code without reading and understanding it, and it breaks or bloats your codebase, that's on you. Don't blame the tool; blame the lazy human who didn't review the code. That was their responsibility: never blindly trust AI without checking, since we know it can make mistakes.

I even think there is a good way to use it for art. Not, of course, by having talentless people type a simple prompt to plagiarize someone's copyrighted style, but by actual artists using it as another tool in their box, iteratively. Have it add templates of things, then draw over them to match your style and ideas wherever the result isn't to your liking. Or privately feed it your own style and have it supplement you. That way, your art stays pretty much your art with your ideas, just made more efficiently. It could speed up the boring parts you're not as good at and don't want to waste time on. For example, you could draw the full line art, have it color it in your style with your own instructions, then fix up anything that isn't to your taste.

...Well, it might be both too late and too early for having it know you only privately, since it's already been publicly trained on copyrighted work. But instead of only being mad, as you deserve to be, you can use its knowledge of you to supercharge your own art and do it better than those plagiarizing hacks, while also keeping up the push for sensible AI copyright laws.

It's a tool, and a tool should be used 50:50 with a human supervisor; without that supervision you're just risking it doing something you would not agree with. (To further prevent misbehavior, we could also train another AI, one not aligned with the primary AI's interests, that is rewarded for snitching on the primary AI's misbehavior, to assist the human supervisor, who should still do their job as well.)

So as a tool, it can have both good and bad use cases. That's why I think people who 100% love or 100% hate specific tools are just cringily blinded by one of the sides, and religiously try to convert everyone who acknowledges the other side to their own 100% one-sided opinion.

Tools can be used for constructive and destructive purposes, and history shows they are typically badly abused first, before we grow up and use them in better ways. I believe AI will meet the same fate: we can keep solving the hallucination problems, improve detection of plagiarism, pass laws to minimize the abuse, implement the stuff I talked about above, and so on. If we just hated and rejected every new invention as soon as it was first abused, we would still be in caves.

There are for sure vastly problematic issues with AI that absolutely need to be tackled, but let's not let that push us into pure hate that denies all the potential.

And lastly, let me apologize for a tiny lie I told in the TLDR. No, don't worry, I still wrote this thing myself. But about that AI paragraph: while I loved what the AI did there, there were two words that, while true to the post, made me a bit afraid of making a bad impression if read that soon before the post itself, so I deleted them. Which further shows that you still should not just copy 100%; keep your own input as well, like I already said. As a small challenge: can you guess what the two deleted words were, heh?

submitted by /u/skr_replicator