I found the multicast registry here.
https://www.iana.org/assignments/multicast-addresses/multicast-addresses.xhtml
I already knew that addresses between 224.0.0.1 and 239.255.255.255 are reserved for multicast.
Obviously multicast could be immensely useful if used by the general public: it would obsolete much of Facebook, YouTube, and nearly all CDNs (content delivery networks), would kill Cloudflare and company's business model, and would rearrange the internet with far-reaching social implications.
So, why haven't all these multicast addresses been converted into usable private IPv4 unicast address space?
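For a sense of scale, the reserved block is 224.0.0.0/4; a quick check with Python's standard-library `ipaddress` module shows how much of the IPv4 space that is:

```python
import ipaddress

# The IANA-reserved IPv4 multicast block is 224.0.0.0/4
# (224.0.0.0 through 239.255.255.255).
mcast = ipaddress.ip_network("224.0.0.0/4")

print(mcast.num_addresses)            # 268435456 addresses
print(mcast.num_addresses / 2**32)    # 0.0625, i.e. 1/16 of all IPv4
print(mcast.broadcast_address)        # 239.255.255.255
```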
Multicast wouldn’t really replace any of the sites you mention because people want and are used to on-demand curated content.
It’s also not as practical as you make it sound to implement it for the entire internet. You claim that this would be efficient because you only have to send the packets out once regardless of the number of subscribers. But how would the packets be routed to your subscribers? Does every networking device on the internet hold a list of all subscriptions to correctly route the packets? Or would you blindly flood the entire internet with these packets?
I think you are missing part of the intent of the question. Multicast wastes a large chunk of the IPv4 range. If it were a smaller range, the leftover IPs would be available for general use.
You're correct that it wouldn't help for the other reasons OP noted, since CDNs do all that heavy lifting already, and do it better than pure multicast could (geolocation, for example).
people want and are used to on-demand curated content.
If we had had a multicast backbone, this would already be a solved problem: curation would be crowdsourced, publishing would be auto-curated through cryptographically verified consensus reputation, with nodes emitting a history of opinions about other nodes, anonymous but with a reputation history. We'd already have the abuse part out of the game; instead we got Zuck's faceless jannies wiping our collective butts!
It’s also not as practical as you make it sound to implement it for the entire internet.
This is the for-profit network operators' consensus view, and their profits are in part made by selling the solution to disabled multicast back to us. I don't believe it is meaningfully a technical problem; if there had been the will, it would already have been done.
Does every networking device on the internet hold a list of all subscriptions
The routers do (not every device), yes. The MBGP table would be megabytes long and extremely dynamic: impossible to solve in 1980, a crushing challenge in 1990, feasible but not economically expedient in 2000, globally trivial but opposed to financial interests in 2010, and "contempt of business model" in 2020.
As for the routers, it's a challenge on par with keeping DNS running on 2000s hardware.
Or would you blindly flood the entire internet with these packets?
No, that's broadcast.
globally trivial
Please share your trivial solution then.
We organize multicast nationally and globally like we do RF band plans. Some addresses are reserved for advertisement streams that anyone can pick up, at the global, national, regional, metropolitan, city, town, and street levels.
Eligible hosts subscribe, or advertise their "subscription" to a particular address (and port; we've got 65536 ports, and most communication between two hosts uses just one).
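That subscribe step already exists today at the host level: joining a multicast group on a UDP socket makes the kernel send an IGMP membership report to the local router. A minimal sketch (the group address and port here are hypothetical, and the join may fail on a host with no multicast-capable interface):

```python
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007   # hypothetical group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq: group address + local interface (0.0.0.0 = any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    # This is the "advertise your subscription" step: an IGMP join.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    print("joined", GROUP)
except OSError as err:
    # E.g. no multicast route on this host.
    print("join failed:", err)
```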
The subscriptions are broadcast within their scope and pooled into distributed tables copied in bulk between routers. It's a simple association of multicast groups and subscribed hosts.
The end result is that, at minimum, all routers between a host and its subscribers have a copy of the multicast group membership for that address.
When a packet to that address arrives at any router in between, the route triggers and the router sends it down each of its WAN ports that has a unicast subscriber downstream.
That's basically the multicast process, with just slightly improved protocols and caching for efficiency.
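The forwarding rule described above can be sketched as a toy model (names are hypothetical, and this is nothing like a real PIM/IGMP implementation): a router keeps a group-to-port membership table and emits one copy per interested port, never one per subscriber.

```python
from collections import defaultdict

class Router:
    """Toy multicast forwarder: group address -> set of downstream ports."""
    def __init__(self, name):
        self.name = name
        self.memberships = defaultdict(set)

    def subscribe(self, group, port):
        # An IGMP-style join, recorded against the port it arrived on.
        self.memberships[group].add(port)

    def forward(self, group, payload):
        # One copy per interested port -- not one per subscriber.
        return [(port, payload) for port in sorted(self.memberships[group])]

r = Router("edge-1")
r.subscribe("239.1.2.3", port=1)
r.subscribe("239.1.2.3", port=1)   # second host on the same port: no extra copy
r.subscribe("239.1.2.3", port=3)
print(r.forward("239.1.2.3", b"frame"))   # two copies, one per port
```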
I say slightly improved, but I think it's already all there, burned into the silicon. It's just a matter of turning it on, and politicians putting the screws to ISPs to make them play nice.
The bulk of it is already in these protocols:
IGMP (Internet Group Management Protocol)
PIM (Protocol Independent Multicast)
PIM Sparse Mode (PIM-SM)
PIM Dense Mode (PIM-DM)
MSDP (Multicast Source Discovery Protocol)
MBGP (Multiprotocol BGP)
There might still be a bit of glue missing to get there, but on the whole, on a technical front this is less technology than it took to get BitTorrent to work. And BitTorrent works really, really well!
We're going to need more client-side software, but that will come as soon as "multicast works"; it just didn't make sense to build a global multicast stream browser when there was no global multicast.
We're talking stream browsers, viewer clients for all kinds of media types: video viewers ("TV", "radio"), text streams, notification streams, things we cannot even imagine yet.
And we'll need a non-censorious curation system: an anonymous, cryptographic, crowdsourced reputation system, "Let's Encrypt" on steroids. Voting and beyond-voting systems of likes, dislikes, superlikes, blocks, bans, replies, forwards, crossposts, all the social media stuff, but floating mid-air without a single server or janitor managing it all; just the same abuse prevention that deals with DDoS and spam. Everything else is fair game and Section 230 protected.
They are still in use as multicast. Typically it’s for local traffic.
I don't think multicast over the internet would have taken off, as multicast requires all routers between the source and any destination to be multicast-aware. Each would need to keep track of the subscriptions, meaning more resources, which would mean higher cost. There was also less interest, as one of the pluses of internet delivery was that delivery was on demand.
In the end, CDNs were going to be created anyway for static content, and streaming could just use the same systems to produce effectively the same improvements.
But your next question would be: why have they not done it for the experimental range?
Well, everything knows those packets don't belong on the internet, so it will block them. If you want to ask the internet to upgrade everything for that, well, just ask how the IPv6 upgrade is going.
Each would need to keep track of the subscriptions, meaning more resources that would mean higher cost.
They already need to do it for unicast, which necessarily takes more resources. Think of it: 500 unicast streams or a single multicast stream; it's not even close how much less computing power multicast takes.
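The scaling argument is simple arithmetic (the stream rate and viewer count below are hypothetical numbers, chosen only to show the shape of it):

```python
# Back-of-the-envelope: one video stream, many viewers.
stream_mbps = 5     # hypothetical per-stream bitrate, Mbit/s
viewers = 500       # hypothetical subscriber count

unicast_uplink = stream_mbps * viewers   # sender emits every copy itself
multicast_uplink = stream_mbps           # sender emits once; routers fan out

print(unicast_uplink)    # 2500 Mbit/s
print(multicast_uplink)  # 5 Mbit/s
```

The sender's load is constant with multicast no matter how many subscribers join; with unicast it grows linearly.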
Make no mistake, multicast is broken by choice. Working multicast is "contempt of business model"; it would cannibalize CDN profits by becoming as free as unicast.
the same systems to produce effectively the same improvements.
One crucial distinction is that you, as an individual, will need Zuck's permission to use their system, their way, under their rules.
And of course by "Zuck" I mean "the cloud", aka "someone else's computer", which would not be needed if multicast just worked. Another enforced cloud dependency.
1/16 of all IPv4 addresses was reserved for PUBLIC USE, but it remains firmly in the grasp of private hands, private hands that want you to pay the toll and obey their masters.
well just ask how the ip6 upgrade is going.
My GPON fiber ISP said "we'll probably never implement IPv6", even though every single piece of equipment on their network supports it, even their horrible rebadged Huawei routers.
It won’t work and it will keep not working until we make them.