


demands for interoperability with non-Apple products
the rules were not applied to Samsung
I wonder if Samsung owns a walled garden where their products aren’t interoperable with those from other manufacturers.
It does also suplex dragons onto your house.


Daily Mail’s actual headline (according to her via her Bluesky account)
Bloodthirsty trans author mocks Charlie Kirk’s looks after she was fired by prestigious publisher for joking about his murder


You’re closer to the work reading a translation that’s accessible to you than reading the original words through language friction. So you shouldn’t feel bad about reading a translation.
Are you calling the people on the receiving end of the genocide “genocidal”? Is your argument that if they just committed suicide, the pitiable Nazis wouldn’t need to go through the bore of conducting the genocide, so their refusal equates to demanding a genocide?
Beyond even that, they praise inviting people to negotiate and then murdering those negotiators.


The headline is their headline. They’re saying a Russian air attack ignited the building.


old accounts, with maybe 1 comment per month
Isn’t that just lurking behaviour?


establish a platform discussing the history of Tiananmen Square or the Uyghurs without strictly adhering to government-set guidelines, then they will likely be prosecuted.
Tiananmen Square has a 600-year history. You’re referring to one event, which is censored. But even that doesn’t cover the portion that is relevant to the history of Tiananmen; no part of the protests is censored. Uyghur history also isn’t censored.
It is true that you’ve been able to identify one instance of censorship across your two topics (albeit with inaccurate wording). It’s also a particularly sensitive topic with strong disinformation campaigns targeting it. In 2020, many states worldwide imposed censorship on COVID- and vaccine-related topics for similar reasons.


300i https://www.bilibili.com/video/BV15NKJzVEuU/
M4 https://github.com/itsmostafa/inference-speed-tests
It’s comparable to an M4, maybe a single order of magnitude faster than a ~1000 euro 9960X, at most, not multiple. And if we’re considering the option of buying used, since this is a brand new product and less available in western markets, the CPU-only option with an EPYC and more RAM will probably be a better local LLM computer for the cost of 2 of these and a basic computer.


That’s still faster than your expensive RGB XMP gamer-RAM DDR5 CPU-only system, and, depending on what you’re running, you can saturate the buses independently, doubling the speed and roughly matching a 5060. I disagree that you can categorise the speed as negating the capacity, as they’re different axes. You can run bigger models on this; smaller models will run faster on a cheaper Nvidia card. You aren’t getting 5080 performance and 6x the RAM for the same price, but I don’t think that’s a realistic ask either.
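That capacity-vs-speed tradeoff can be sanity-checked with a back-of-envelope calculation: decode speed on a dense model is roughly memory bandwidth divided by the bytes streamed per token (about the full weight size). A minimal sketch, where the bandwidth and model-size figures are illustrative assumptions, not measured specs for any of the hardware discussed here:

```python
# Back-of-envelope decode speed for memory-bandwidth-bound LLM inference.
# tokens/s ~= usable memory bandwidth / bytes touched per token
# (roughly the full weight size for a dense model).

def tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode rate when all weights are streamed each token."""
    return bandwidth_gb_s / model_size_gb

# Illustrative, assumed numbers -- not vendor specs:
dual_channel_ddr5 = tokens_per_second(80, 40)   # 80 GB/s bus, 40 GB model
wide_lpddr_card   = tokens_per_second(200, 40)  # 200 GB/s bus, same model

print(f"{dual_channel_ddr5:.1f} tok/s vs {wide_lpddr_card:.1f} tok/s")
```

The point is the shape of the formula, not the exact numbers: bandwidth buys speed, capacity buys model size, and neither substitutes for the other.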


I agree with your conclusion, but these are LPDDR4X, not DDR4 SDRAM, which is significantly faster. The lack of fans should also be read as a positive, since it means they’re confident the cards aren’t going to melt; it would cost them very little to add visible active cooling to a 1000+ euro product.


You can run llama.cpp on a CPU. LLM inference doesn’t require any features that only GPUs have, which is why it’s possible to build even simpler NPUs that can still run the same models; GPUs just tend to be faster. If the GPU in question is not faster than an equally priced CPU, you should use the CPU (better OS support).
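To illustrate why CPUs (and simple NPUs) can run the same models: decode-time inference is dominated by matrix-vector products over the weight matrices, an operation with no GPU-only requirement. A toy sketch; the sizes are arbitrary and not taken from any real model:

```python
import numpy as np

# Toy version of the core of one decode step: a matrix-vector product
# over a weight matrix. CPU inference is mostly this, repeated per
# layer, and is bandwidth-bound rather than dependent on any
# GPU-only hardware feature.
rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024)).astype(np.float32)  # one weight matrix
x = rng.standard_normal(1024).astype(np.float32)          # activation vector

y = W @ x  # any CPU can execute this; GPUs are just usually faster
print(y.shape)
```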
Edit: I looked at a bunch of real-world prices and benchmarks, and read the manual from Huawei, and my new conclusion is that this is the best product on the market if you want to run a model at modest speed that doesn’t fit in 32GB but does fit in 96GB. Running multiple in parallel seems to range from unsupported to working poorly, so you should only expect to use one.
Original rest of the comment, made with the assumption that this was slower than it is, but had better drivers:
The only benefit of this product over a CPU is that you can slot in multiple of them and they parallelise without needing to coordinate anything with the OS. It’s also a very linear cost increase, as long as you have the PCIe lanes for it. For a home user with enough money for one or two of these, they’d be much better served spending the money on a fast CPU and 256GB of system RAM.
If not AI, then what use case do you think this serves better?
This is still putting some equivalency on a non-aggression treaty and actual military alliance.


Does it have any sort of on-board NPU to make it AI-oriented?


Isn’t that bad optics for Ukraine? If they’re killing more and capturing fewer, then that has to mean something like Ukraine is killing POWs (a war crime), or Ukrainian soldiers have a higher desire to surrender, which could be indicative of ideological sympathy or of horrible treatment by the army.


“Unalive” started being widely used around 2020–2021.
Is this about pubs in Canada? You play music like it’s a club?
Nah, Zelensky hasn’t refused anything Trump has asked of him; he’s carrying out the plan. The USA wants out optically, but they want the war to continue, with EU nations managing it. They couldn’t be happier with him.
It’s also their economy. They might refuse because they want to eat.