

Yeah, I didn’t want to make the bold and refreshing assertion that arch isn’t appropriate for situations where gracefully handling an old package is a requirement, but that was my initial read on the situation.




I’m not as familiar with the AUR as I am with apt and now dnf; is there a function to keep it from automatically installing something newer? That’s what I meant when I referred to pinning.


If arch doesn’t have version pinning then switch to a distribution that does.
Debian has version pinning; nvidia runs a third-party repository, and it has a pinning package you can install to get and stay on the 580 branch.
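For reference, a Debian pin is just a file in /etc/apt/preferences.d/. A minimal sketch (the filename and the exact package globs are my assumptions, check what the repo actually ships):

```
# /etc/apt/preferences.d/nvidia-580  (hypothetical filename)
# Hold the nvidia driver packages on the 580 branch so apt
# never upgrades past it, even when a newer branch appears.
Package: nvidia-driver* libnvidia*
Pin: version 580.*
Pin-Priority: 1001
```

A priority over 1000 means apt will even downgrade to a matching version if something newer slipped in first.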
Pardon, your replies in this comm. It’s not precise language on my part but I think the meaning should be clear.
Without knowing what games you want to run or what your budget is it would be hard to give more helpful input than “anything will work, give serious consideration to not virtualizing”.
What were you looking for, models and specs?
E: you are absolutely looking for models and specs. I assumed you were just feeling around to figure stuff out because of your other posts in this comm. My apologies.
The short answer is that it doesn’t matter for the requirements you’ve given. Just to make sure I wasn’t lying when typing that, I created and ran a windows 11 vm under kvm on Debian, installed on a ten-year-old thinkpad, and it ran fine. The specs were an i5-3320m and 16gb of ram. I was able to start and run affinity and nuclear throne. I only made a 30gb qcow device for that vm, so you probably don’t need a 1tb disk…
Assuming you want to run more modern games, both the recent (<5 or so years old) intel and amd integrated graphics perform decently at 1440p and 1080p, which is what a lot of laptops have for screens.
Laptops with replaceable ram are rarer than they once were, but can still be had and any laptop with ddr4 will be less expensive than one with ddr5. You don’t seem to have any use case that needs faster ram, so that’s a cost/performance tradeoff you may be willing to make.
I would personally stay away from “gaming” oriented laptops because they’re generally optimized around performance and price with build quality, durability and longevity left by the wayside.
So for specs I’d say a recent cpu with igpu (it’s hard to find one in a laptop nowadays that doesn’t have the igpu!), 16gb of ddr4 if it’s upgradable and 32gb of ddr4 if it’s not, and maybe 512gb of storage if it’s soldered and 256 or whatever if it’s not.
Again, if you have specific games you want to run then that changes things.
Most games run well under wine/steam. Most of the ones that don’t are using programming techniques intended to catch a vm, hypervisor or host os, like anti-cheat.
So you can probably take gaming off your vm uses list. If you can’t because you wanna run games that use anti cheat as above, skip to the bottom of this reply.
I do not use affinity, but my experience with applications that have an “output” like design, modeling or productivity is that it’s often not worth it to run them under some compatibility layer or virtualization system. Every time you start that program up you need it to run so you can blast out an idea, show someone how the project is going or open something someone sent you, and it’s infinitely more frustrating to have to figure out what changed since last night to make it not work, or to make the magic marker brush (and only the magic marker brush!) cause an immediate crash. This might also be a “jump to the end” scenario. Try it first and see though!
Windows 11 has relaxed requirements for its iot versions. It both loads less into cache and requires less memory in addition to opening up to CPUs as far back as third and fourth generation Intel core chips from 14 years ago. So use that version of windows for your vm and you can easily scrape by with 16gb of ram if you see yourself needing to.
Most people like amd gpus better on linux, I tend to like nvidia better at the moment. I have a lot of experience with linux and high tolerance for troubleshooting though so your mileage may vary.
This is some counterintuitive input and I will not be answering questions about it, take it or leave it: if you plan to keep your computer for a while, buy something with a cpu manufactured on the largest “process” you can reasonably accept. As chips’ features get smaller and smaller it takes less time and energy for electromigration to fundamentally change their behavior.
If you find yourself needing to run games or even software packages that care deeply about knowing they’re on bare metal windows, just dual boot. It will only take a little time to boot back and forth, and the only prerequisites are learning your distro’s grub repair process in case windows overwrites your bootloader, and keeping backups so you don’t panic, which you should be doing anyway.
Do you have a 12k or 16k one?
What printer are you using? I have an h frame knockoff of one from the pre covid days but need to get a resin.
Well, one example of a timing attack is replaying. It’s a fucking classic, chef’s kiss kind of signaling attack where you bypass the need to understand what’s going on by just saying it verbatim, using your capacity to accurately reproduce some information and easily sidestepping all kinds of shibboleths.
Before computers, replaying was used during both ww1 and ww2 to confuse and misdirect radio operators, and back when keyless entry was a newfangled thing it was used to spoof the unique signals each manufacturer chose to use. Even after they all switched to rolling codes, replaying is a way to both desynchronize the owner’s fob and replay their command at almost the same time, getting you into the car.
In computing, replaying would be a fantastic way for a man in the middle to pretend he knew some password or was some service, as indicated by an encrypted or hashed transmission the man in the middle could just store and replay. Darla can listen in to the way Alfalfa says the password to the he man woman haters club and, with good practice, recite it convincingly!
If Darla were a computer then even Alfalfa’s securely hashed password would be no problem, because she doesn’t need to pronounce it, just reproduce it in all its unpronounceable hexadecimal glory.
But what if the instructions for the he man woman haters club authentication were instead an encrypted transmission saying “the club’s clock says it’s 4:15.45.6789 pm, April twentieth 1969. When you reply with your password hash, include the club’s clock time down to the millisecond.”? Now Darla can’t just replay Alfalfa’s hashed authentication token because it’s the wrong time!
Because of ntp, girls remain not allowed.
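To make the bit concrete, here’s a toy sketch in Python (the password, names and “protocol” are all invented for illustration) of why a bare hashed password is replayable: the guard only ever compares against the same digest, so a captured token works forever.

```python
import hashlib

# The club stores only the hash of Alfalfa's password.
stored_digest = hashlib.sha256(b"otay").hexdigest()

def door_guard(token: str) -> bool:
    # Naive check: any token matching the stored digest gets in.
    return token == stored_digest

# Alfalfa authenticates; Darla is listening on the wire.
alfalfas_token = hashlib.sha256(b"otay").hexdigest()
captured = alfalfas_token  # Darla records it verbatim

assert door_guard(alfalfas_token)  # Alfalfa gets in
assert door_guard(captured)        # ...and so does Darla, forever
```

Nothing about the token ties it to a moment in time, which is the hole the clock-based scheme closes.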
How would such an attack affect the printer? Who can say! I can speculate that an interloper could make it do things the user could, like print stuff, burn up the nozzle or smash into its extents. The printer controller is basically just a little computer so gaining access to it as an authenticated user might make it easier to escalate privileges and use it like any other computer might be used by a malicious actor as well.
Let’s say though, that part of the out of the box setup is connecting to the printer through some app or program. You want tls encryption for that, and you want the user or their software to exchange certificates to make it all official, but that technology requires that time be synchronized between the two devices in order to do so. If the printer’s time is inaccurate enough, it can’t even negotiate a secure connection with the owner’s phone app they use to send it instructions.
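A sketch of why the clock matters for certificates, with made-up dates: one part of validation is checking that “now” falls inside the cert’s validity window, so a device whose clock is far enough off rejects a perfectly good certificate.

```python
from datetime import datetime

# Hypothetical certificate validity window.
not_before = datetime(2024, 1, 1)
not_after = datetime(2025, 1, 1)

def cert_time_valid(now: datetime) -> bool:
    # One of several checks tls certificate validation performs.
    return not_before <= now <= not_after

# A printer with an accurate clock accepts the cert...
assert cert_time_valid(datetime(2024, 6, 1))
# ...but one that booted thinking it's 1970 rejects it.
assert not cert_time_valid(datetime(1970, 1, 1))
```

A lot of iot gear with no battery-backed clock really does wake up thinking it’s the epoch, which is exactly why it reaches for ntp first thing.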
So ntp makes sense in this case. If you’re gonna be doing communication you gotta do it responsibly and it’s good that iot stuff like this is making some effort!
E: I realized I glossed over some stuff assuming you’d make some jumps and in the cold light of morning that might not actually be the case.
Uhh, let me be clear: what is much closer to reality is that the guard doesn’t assert the club’s clock time but instead relies on all parties’ knowledge of utc (gmt but long and made by fat people), and there’s a snappy little back and forth between Darla and the door guard at the hmwhc where Darla asserts her system’s time a few times, and the door guard uses that to form a reasonable expectation of how far off a hashed token containing her understanding of the current time could be.
Now when she tries to pass Alfalfa’s hashed password-and-current-time token she fails, because it doesn’t match her expected time or the door guard’s, and there’s even possibly a record of who asserted what and when, giving a forensic chain that can be used in legal proceedings against women!
Hashing is exactly what it sounds like, but math. Just like at the breakfast spot, it’s when you take your easily recognizable potato with the word “formosissima” carved into it and cut it into a bunch of equally sized pieces. It’s made into a hash! The catch is that unlike breakfast, a good cryptographic hash can’t be reassembled: even someone who knows your exact cutting process can’t put the tater back together. The best they can do is cut up their own potatoes the same way, see if the hash matches, and have a big ol meltdown over it.
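In code the potato bit looks like this (using sha256 as the stand-in cutting process): the same input always hashes the same, a one-letter change scrambles everything, and the output is always the same size no matter what went in.

```python
import hashlib

a = hashlib.sha256(b"formosissima").hexdigest()
b = hashlib.sha256(b"formosissima").hexdigest()
c = hashlib.sha256(b"formosissimb").hexdigest()

assert a == b        # deterministic: same potato, same hash
assert a != c        # one changed letter, completely different digest
assert len(a) == 64  # fixed-size output regardless of input length
```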
That simple system, where you figure out how incapable of telling time the unknown, untrustworthy weirdo attempting to gain access to some resource is, then expect them to cough up a token that contains hashes of both that resource’s shared secret and the time, and then check em to make sure they’re reasonable, is a super straightforward way to implement the “actually look at the id being presented at the bar” test in computers.
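A minimal sketch of that scheme (the secret, window size and message format are all invented for illustration): the token is an HMAC over the claimed time, keyed with the shared secret, and the guard accepts it only if the claimed time is within a tolerated skew of its own clock, so a replayed token goes stale on its own.

```python
import hashlib
import hmac

SECRET = b"he-man-woman-haters"  # shared secret, made up
MAX_SKEW = 30                    # seconds of clock drift we'll tolerate

def make_token(secret: bytes, claimed_time: int) -> tuple[int, str]:
    # Token = the claimed time plus an HMAC binding it to the secret.
    mac = hmac.new(secret, str(claimed_time).encode(), hashlib.sha256)
    return claimed_time, mac.hexdigest()

def door_guard(claimed_time: int, digest: str, guard_now: int) -> bool:
    # Reject anything outside the skew window: stale tokens (replays) fail.
    if abs(guard_now - claimed_time) > MAX_SKEW:
        return False
    expected = hmac.new(SECRET, str(claimed_time).encode(), hashlib.sha256)
    return hmac.compare_digest(expected.hexdigest(), digest)

now = 1_000_000
t, d = make_token(SECRET, now)
assert door_guard(t, d, now)             # fresh token: accepted
assert not door_guard(t, d, now + 3600)  # same token an hour later: rejected
```

Real systems layer nonces and sequence numbers on top, but the time window alone already kills the verbatim replay.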
This is so widespread that you can’t do credit card transactions without it, can’t establish secure http connections to websites without it and most certainly can’t responsibly pass credentials back and forth without it.
But why would that be required if the printer is only on the local network? Because we can’t trust the local network any more than the internet! The end user could be responsible Laura, carefully considering what devices are allowed on her local network and rotating keys when any of those devices spend time on another network or they could be Steve who just kinda does whatever and hasn’t changed his password in over a decade.
What if we could trust the local network though? Well, in that case you’d want to be coloring inside the lines of the end user’s expected behavior and make sure not to predicate operating the thingamabob you’re selling on making insecure http connections that the browser pitches a hissy fit over.
Okay but what specifically do bad guys try to do with machines they’ve escalated privileges on? Well, botnets are the obvious answer. Iot devices like appliances and whatnot are a juicy target too, because once you’ve got one you can more easily find a quicker and more persistent way to get into more, and suddenly that million plus unit install base is your oyster!
With all that hopefully a little clearer, why should these things even be on the local network? Well, you can’t do network printing if you’re not on the network, can you! It’s a feature that brings a lot more users to the table and removes reliance on any particular weird port or the increasingly untrustworthy universal port. Nowadays people expect it even!
The types of attacks can be loosely characterized as “race conditions, but over the network”. There’s about forty years of history here and it’s way more complicated than that, so unless you really wanna get into it I’ll leave it there.
The printer doesn’t know if it’s plugged into a private network or is internet facing. Timing attacks can occur on private networks as well as on the internet. Having accurate utc is almost always a prerequisite for communicating with other devices.
Therefore, the printer needs to know what time it is. It does this through ntp on port 123 just like phones, computers and network connected paper and ink printers do.
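The time math itself is one subtraction: NTP timestamps count seconds from 1900-01-01 while Unix time counts from 1970-01-01, a gap of 2,208,988,800 seconds, so turning the seconds field of an NTP response into a usable clock is trivial.

```python
# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01):
# 70 years of days, 17 of them leap years, times 86400.
NTP_TO_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds: int) -> int:
    # Convert the 32-bit seconds field of an NTP timestamp to Unix time.
    return ntp_seconds - NTP_TO_UNIX_OFFSET

assert ntp_to_unix(2_208_988_800) == 0  # the Unix epoch itself
```

The actual wire exchange adds round-trip and offset estimation on top, but this is the conversion every ntp client on port 123 is doing.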
No reason to be skeptical, teams and groups are very trustworthy so teamgroup is a lock.
Anything that connects to the network needs a clock synchronized with the other devices it directly communicates with in order to make sure it’s not being subjected to timing attacks. This has been standard practice for 25 years, maybe more, in the end user world because some high profile computer screw ups made use of it. People with weird systems, off the gridders of olde and ppl still on dial up in the teens had some interesting problems to solve when generally all ISPs got dragged kicking and screaming to the table by os updates that made synchronized clocks a non negotiable requirement.
Reseat the stick you installed and run memtest86.
It’s more likely that you have a badly installed stick or a faulty stick than that any consumer memory controller from the last 20 years cares about the installed sticks matching.


You’re getting bad advice.
If you don’t expect to actually be shuffling packets back and forth or doing any kind of quality of service or vpn or really anything then the pi will be the better choice just barely because of its super low power consumption at idle. In that situation you would be at idle enough to actually justify using the pi. It would suck in the same way that using a pi for stuff usually sucks but you could justify it maybe.
If you plan to have a bunch of hosted stuff, a seedbox, qos, manage vpn connections and especially upgrade your lan to 1gb + later on down the line, the mini pc will actually be more efficient per cycle. In that circumstance you’d be at idle less, and the mini pcs more powerful processor, wider bus and expandability would make it less of a bottleneck presently and down the road.
Risc CPUs like the arm in the raspberry pi are really good at not doing anything, or doing a really small subset of things (it’s in the name!), but x86 is great at doing some stuff and being able to do a wide variety of stuff with its big instruction set. If you raise an eyebrow at my claim, consider that before gpus were the main way to do math in a data center it was x86. If the people who literally count every fraction of a watt of power consumption as billable time think it’s most efficient it probably is!
With ’08-and-later CPUs’ ability to turn cores and functions off at the clock tree and to communicate back and forth with the os that orchestrates and coordinates it all, there’s not as much daylight between the power usage of a pi and a mini pc as some of these comments might make you think.
The long and the short of it is that you’ll most likely have a better time using the mini pc than the pi and claims that it’ll bankrupt you with power bills are greatly exaggerated.
In terms of privacy, I’d go for the mini pc. All your packages are most likely going to be open source, but the x86 stuff gets more scrutiny and isn’t as “magic blobby” as the arm world is.
Source: I have used over twenty different pi variants including knockoffs, wrote for microcontrollers before they were called sbcs, host a bunch of services on x86 which are monitored for their power usage using a power distribution controller by my lovely wife who keeps an eagle eye on the bills and I literally registered an account because people were telling you the wrong thing on the internet.
If you wanna verify that for yourself, get a cheap plug em in power meter and try both units running the package you choose under some artificial load like managing qos between a device streaming 4k and one torrenting 50 different Linux isos.
Are you sure the animations it’s giving you are ai? Android recently got the thing where the phone actually takes a short movie and picks a frame, processes it and hands it to you as the “picture”.