

It’s already in use responsibly.
And irresponsibly.
Turns out you can't really argue the slippery slope of responsibility as a way to shoot down a tool, when how responsibly it's used is an individual choice.
I love seeing these outside views from folks who aren’t developers 🤣
Gen AI is pretty well integrated into development pipelines at this point. In ways that are subtle and quite useful.
Especially autocomplete as you write code, and boilerplate autofill. These use gen AI, are subtle rather than intrusive, and are pretty widely integrated across the development ecosystem.
Like everything, the dose makes the poison. The larger the dose of gen AI, the more poison you are introducing into your work.
These are all holes in the Swiss cheese model.
Just because you and I cannot immediately think of ways to exploit these vulnerabilities doesn't mean they don't exist or aren't already in use (including other vulnerable endpoints not listed).
This is one of the biggest mindset gaps in technology, and it tends to result in an internet filled with exploitable services and devices, which are more often than not used as proxies for crime or traffic rather than directly exploited.
Meaning that unless you have incredibly robust network traffic analysis, you won’t notice a thing.
There are so many Sonarr and similar instances out there with minor vulnerabilities being exploited in the wild because of the same "Well, what can someone do with these vulnerabilities anyways?" mindset. Turns out all it takes is a common deployment misconfiguration in several seedbox providers to turn one into an RCE, which wouldn't have been possible if the vulnerability had been patched.
Which is just holes in the Swiss cheese model lining up. Something as simple as allowing an admin user access to their own password while logged in enables an entirely separate class of attacks. It gets excused with "If they're already logged in, they know the password". Well, not if there's another vulnerability in authentication…
See how that works?
Please see: https://github.com/jellyfin/jellyfin/issues/5415
Someone doesn't necessarily have to brute-force a login if they know about pre-existing vulnerabilities that can be exploited in unexpected ways.
Fail2ban isn't going to help you when Jellyfin has vulnerable endpoints that need no authentication at all.
Jellyfin has a whole host of unresolved and unmitigated security vulnerabilities that make exposing it to the internet a pretty poor choice.
https://github.com/jellyfin/jellyfin/issues/5415
And it won’t scale at all!
Congratulations, you made more AI slop, and the problem is still unsolved 🤣
Current AI solves 0% of difficult programming problems. 0%. It's good at producing the lowest common denominator, and protocols are sitting at the 99th percentile here. You're not going to develop anything remotely close to a new, scalable, secure, federated protocol with it.
Never mind the interoperability, client libraries, etc., or the proofs and protocol documentation, which exist before the actual code.
Wayyyyyy less than 20%.
Even after removing incredibly liberal bot percentages from Reddit, Lemmy is still < 0.001% of the audience.
It’s a solution to a problem Lemmy will soon have in that case.
Which is bots.
Lemmy isn’t flooded with bots and astroturfing because it’s essentially too small to matter. The audience is something like < 0.001% that of reddit.
Once it grows the problem comes here as well, and we have no answers for it.
It's a shitty situation for the internet as a whole, and the only solution is verifying humans. And corporations CANNOT be trusted with that kind of access/power.
Fill balloons full of lube and throw it at them
2 years ago I talked about how the core problem with federated services was the abysmal scalability.
I essentially got ridiculed.
And here we are, with incredibly predictable scaling problems.
If we refuse to acknowledge problems till they become critical, we will never grow past a blip on the corner of the internet. Protocol development is HARD and expensive.
You can't really host your own AWS. You can self-host various amalgamations of services that imitate some of its features, but you can't self-host your own AWS by any stretch of the imagination.
And if you're thinking of something like LocalStack, that's not what it's for, and it has huge gaps that make it unfit for live deployment (it is, after all, meant for test and local environments).
Or peach 🍑 or splash 💦
Kind of dumb really, I hate censorship.
The hard part is in the scripting: the retries, the backoff, automation, queuing and queue management, etc.
At that point I’m implementing my own bootleg TubeArchivist 😅
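To give a sense of just one of those pieces, here's a minimal sketch of retry logic with exponential backoff and jitter. All names and parameters are hypothetical, not from any particular tool:

```python
import random
import time

def fetch_with_retries(fetch, max_attempts=5, base_delay=1.0):
    """Retry a flaky zero-argument callable with exponential backoff.

    `fetch`, `max_attempts`, and `base_delay` are illustrative names.
    """
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            # exponential backoff: base, 2x, 4x, ... plus random jitter
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)
```

And that's before you've written any of the queueing, deduplication, or scheduling around it, which is exactly where the real effort goes.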
Oh, it's definitely an easy-to-read DB. But that's still beside the point IMHO.
If you can't reconstruct the state of your files without third-party software to interpret them, then they are not in an archive format.
One should be able to browse their data using OS-native tools on an offline device if push comes to shove.
I mean, at this point you're just being intentionally obtuse, no? You are correct, of course: volatile memory, if you consider it from a systems point of view, would be pretty asinine to try to store.
However, we're not really looking at this from a systems view, are we? Clearly you ignored all the other examples I provided just to latch on to the memory argument. There are many other ways this data could be stored in a transient fashion.
I mean, it’s more complicated than that.
Of course, data is persisted somewhere, in a transient fashion, for the purpose of computation. Especially when using event based or asynchronous architectures.
And then promptly deleted or otherwise garbage collected in some manner (either actively or passively, usually passively). It could be in transitory memory, or it could be on high speed SSDs during any number of steps.
It's also extremely common for data storage to happen at the caching-layer level without violating requirements that data not be retained, since those caches are transient. Let's not even mention the reduced-rate "bulk" non-synchronous APIs, which use idle, cheap computational power to do work in a non-guaranteed amount of time, and which require some level of storage until the data can be processed.
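A transient caching layer of that sort can be sketched in a few lines. This is a minimal illustration with invented names, not any real provider's implementation: entries expire after a TTL and are evicted passively on the next read, so nothing is retained as a long-term record.

```python
import time

class TTLCache:
    """Minimal transient cache: entries expire after `ttl` seconds
    and are dropped lazily (passively) when next accessed."""

    def __init__(self, ttl):
        self.ttl = ttl
        self._store = {}

    def put(self, key, value):
        # record the value along with its expiry deadline
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # passive garbage collection on read
            return None
        return value
```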
A court order forcing them to start storing this data is a problem. It doesn’t mean they already had it stored in an archival format somewhere, it means they now have to store it somewhere for long term retention.
Well yeah!
That’s the CD part :)
We’re rolling the same thing, except with all our cloud infrastructure, our code, and various integrations.
Automatic deployments are so great, as long as you trust your integration process and test suites.
There’s a reason we value the local development environment.
You can run everything locally, the only use for the cloud environment is for CD.
Only if you don’t have the critical thinking to understand how information management is a significant problem and barrier to medical care.
Being able to research and find material relevant to a patient's problem is an arduous task, and often too high a barrier for doctors to invest in given their regular workloads.
Which leads to a reduction in effective care.
By providing a more efficient and effective way to dig up information that saves a ton of time and improves care.
It’s still up to the doctor to evaluate that information, but now they’re not slogging away trying to find it.