arka2147483647's comments | Hacker News

The advantage, as I see it, is that this could be done incrementally: every new router, firmware, and OS could add support until support is ubiquitous.

Contrast this with IPv6, which is a completely new system and thus has a chicken-and-egg problem.


That is how v6 worked though. Every router and consumer device supports v6 and has for a very long time now. The holdup ended up being ISPs.

Today it seems most ISPs support it, but keep it behind an off-by-default toggle.


Wouldn't this proposal require nothing from ISPs? They already assign every user a unique IPv4 address. Then, with this proposal, if I wanted a bunch of computers behind that single IPv4 address, I could do it without relying on NAT tricks.


> Wouldn't this proposal require nothing from ISPs? They already assign every user a unique IPv4 address.

The reason there's an IPv4 address shortage is because ISPs assign every user a unique IPv4 address. In this alternative timeline, ISPs would have to give users less-than-an-IPv4 address, which probably means a single IPv4x address if we're being realistic and assuming that ISPs are taking the path of least resistance.


ISPs that want to split it up could do so, while other ISPs just stick to v4, provided the hosts at least understand v4x.


If that happens then the user can only communicate with hosts supporting IPv4x and you're back to the IPv6 issue


As long as IPv4x support was just something you got via software update rather than a whole separate configuration you had to set up, the vast majority of servers probably would have supported IPv4x by the time addresses got scarce.

However, if it did become a problem, it might be solvable with something like CGNAT.


CGNAT would also be easier on routers, since currently they need to maintain a table mapping each of their ports in use to a destination IP and port. With IPv4x, the routing information can be determined from the packet itself, and no extra memory would be required.
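A minimal Rust sketch of the per-flow state being described, assuming a toy NAT with plain u32 addresses (IPv4x itself is a thought experiment, not a real protocol):

```rust
use std::collections::HashMap;

// Sketch: a classic NAT must remember one table entry per active flow,
// mapping the public port it picked back to the internal host.
fn main() {
    // public source port -> (internal IPv4 address, internal port)
    let mut nat_table: HashMap<u16, (u32, u16)> = HashMap::new();

    // internal host 10.0.0.2:5000 opens a flow; the NAT picks public port 40001
    nat_table.insert(40001, (0x0A00_0002, 5000));

    // every reply packet needs a lookup, and every flow costs memory;
    // a stateless scheme would instead derive the internal host from the packet
    assert_eq!(nat_table.get(&40001), Some(&(0x0A00_0002, 5000)));
}
```

The point of the contrast is that the table above grows with the number of concurrent flows, while a stateless translation would cost constant memory.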


That's only true when forwarding IPv4x -> IPv4. When you're going the reverse direction and need to forward IPv4 -> IPv4x, you still need a table.


There aren’t enough IPv4 addresses to give everyone one. That is why ISPs use CGNAT to hide multiple customers behind one IP address.

Something that just uses IPv4 won’t work without making the extra layer visible. That may not have been apparent then but it is now.


It's not just ISPs. Tons of services are v4 only.


IPv6 is a parallel system. It coexists with IPv4. You don't need to stop using IPv4 - ever - if you don't want to. You can have both the chicken and the egg together for as long as needed.

At some point IPv4 addresses will cost too much.


Surely you could have compiled types for 128, 256, 512, etc., and then choose the correct codepath with a simple if statement at runtime?


The smart home and lighting standard Matter over Thread requires it. I discovered this after I bought some IKEA smart lights. You don't need a public IPv6 address, though; a local IPv6 address via SLAAC is enough.


> Delaware C-corp, UK Ltd is OK too

Neither of which is in the EU, which is exactly the point. It should be an EU one that is usable...


The title says "One Europe" and "Pan-European".


This is an EU initiative. Confusingly, the EU is often called Europe in spoken/non-official speech, in the same way it is said that Washington does something when it is the US government doing it.


Sadly, all the bug trackers are full of bugs relating to char*, so you very much do hit those by accident. And in C, fixed-width strings are not in any way rare or unusual. Go into any C codebase and you will find stuff like:

   char buf[12];
   sprintf(buf, "%s%s", this, that); /* overflows buf if the result needs >= 12 bytes */
   strcat(buf, ...);                 /* or: no bounds check at all */
   strncpy(buf, ...);                /* or: may leave buf unterminated, and so on.. */


That's only really a problem if this and that are coming from an external source and have not been truncated. I really don't see this as any more significant a problem than all the many high-level scripting languages where you can potentially inject code into a variable and have it interpreted.

There are certainly ways in which the C library could've been better (e.g. making strncpy handle the case where the source string is longer than n), but ultimately it will always need to operate under the assumption that the people using it are both competent and acting in good faith.


When you write such code your mental model is C strings, not fixed-width strings, the intended use case for strncpy.


For someone who is not a Rust programmer but would like to keep up to date: can somebody tell me what "Tier 4" is, and why it must be quoted?


Rust has three "platform support" tiers (effectively: guaranteed to work, guaranteed to build, supposed to work). However, these are (obviously) defined only for some of the target triples. This project defines "Tier 4" (which is normally not a thing): unstable support for Windows Vista and prior.


Tiers 1-3 are policies[0] for in-tree targets, so by saying "tier 4" they mean one implemented in a fork. Though that kind of skips over targets that can get away with just a custom target spec[1] and no source modifications.

[0] https://doc.rust-lang.org/beta/rustc/target-tier-policy.html [1] https://doc.rust-lang.org/rustc/targets/custom.html


Tier 3 is the highest official tier.


All the functions mentioned above, even the C++ one, will reserve at least the number of elements given to reserve() or reserve_exact(), but may reserve more than that.

After some pondering, and reading the Rust documentation, I came to the conclusion that the difference is this: reserve() will grow the underlying memory area to the next increment, or more than one increment, while reserve_exact() will only grow the underlying memory area to the next increment, but no more than that.

E.g., if the growth strategy is powers of two and we are at pow(2), then reserve() may skip from pow(2) to pow(4), but reserve_exact() would be constrained to pow(3) as the next increment.

Or so I read the documentation. Hopefully someone can confirm?


> even the C++ one, will reserve at least the number of elements given

The C++ one, however, will not reserve more than you ask for (in the case that you reserve more than the current capacity). It's an exact reservation in the Rust sense.

> reserve() will grow the underlying memory area to the next increment, or more than one increment, while reserve_exact() will only grow the underlying memory area to the next increment, but no more than that

No, not quite. Reserve will request as many increments as it needs, and reserve_exact will request the exact total capacity it needs.

Where the docs get confusing is that the allocator also has a say here. In either case, if you ask for 21 items, and the allocator decides it prefers to give you a full page of memory that can contain, say, 32 items... then the Vec will use all the capacity returned by the allocator.


As far as I can tell, in the current implementation, reserve_exact is indeed exact. The only situation in which the capacity after calling reserve_exact will not equal length + additional is when it was already greater than that. Even if the allocator gives more than the requested amount of memory, the excess is ignored for the purposes of Vec's capacity: https://github.com/rust-lang/rust/blob/4b57d8154a6a74d2514cd...

Of course, this can change in the future; in particular, the entire allocator API is still unstable and likely won't stabilize any time soon.
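A quick check of that on stable Rust; note the exact equality is a property of the current implementation, as described above, not a documented guarantee:

```rust
fn main() {
    let mut v: Vec<u64> = Vec::new();
    v.reserve_exact(21);
    // documented guarantee is only `>= 21`...
    assert!(v.capacity() >= 21);
    // ...but the current implementation ignores any excess the allocator
    // returned, so today the capacity comes out exactly 21
    assert_eq!(v.capacity(), 21);
}
```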


Maybe more interestingly, line 659, slightly above that, explains that we know we got a [u8], but today the ordinary Rust allocator promises the capacity is correct, so we just ignore the length of that slice.

We could, as that comment suggests, check the slice and see if there's enough room for more than our chosen capacity. We could also debug_assert that it's not less room, 'cos the Allocator promised it would be big enough. I dunno if that's worthwhile.


https://en.cppreference.com/w/cpp/container/vector/reserve.h...

says

> Increase the capacity of the vector (the total number of elements that the vector can hold without requiring reallocation) to a value that's greater or equal to new_cap.

I believe that the behaviour of reserve() is implementation-defined.


Because there's only a single function here, it has to be either Vec::reserve or Vec::reserve_exact.

If you don't offer Vec::reserve_exact, then people who needed it run out of RAM and will dub your stdlib garbage. If you don't offer Vec::reserve then, as we've seen, C++ programmers will say "skill issue" whenever a noob gets awful performance as a result. So it's an easy choice.


That said, MSVC, GCC and Clang all implement it to allocate the exact value.


> In either case, if you ask for 21 items, and the allocator decides it prefers to give you a full page of memory that can contain, say, 32 items... then the Vec will use all the capacity returned by the allocator.

It would be nice if this were true, but AFAIK the memory allocator interface is busted - Rust inherits the malloc style from C/C++, which doesn't permit the allocator to tell the application "you asked for 128 bytes but I gave you an allocation of 256". The alloc method just returns a naked u8 pointer.


The global allocator GlobalAlloc::alloc method does indeed return a naked pointer

But the (not yet stable) Allocator::allocate returns Result<NonNull<[u8]>, AllocError> --- that is, either a slice of bytes OR a failure.

Vec actually relies on Allocator not GlobalAlloc (it's part of the standard library so it's allowed to use unstable features)

So that interface is allowed to say you asked for 128 bytes but here's 256. Or, more likely, you asked for 940 bytes, but here's 1024. So if you were trying to make a Vec<TwentyByteThing> and Vec::with_capacity(47) it would be practical to adjust this so that when the allocator has 1024 bytes available but not 940 we get back a Vec with capacity 51 not 47.
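Spelling out the size arithmetic in that example (TwentyByteThing, the 47-element request, and the 1024-byte allocation are all the comment's hypotheticals):

```rust
fn main() {
    let elem_size = 20usize;             // hypothetical TwentyByteThing
    let requested = 47 * elem_size;      // Vec::with_capacity(47) asks for 940 bytes
    let granted = 1024usize;             // suppose the allocator rounds up to 1 KiB

    assert_eq!(requested, 940);
    // if Vec exploited the whole returned slice, the usable capacity would be 51
    assert_eq!(granted / elem_size, 51);
}
```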


You misread the documentation. reserve_exact is precisely that: the growth strategy is ignored, and you are ensured that at least that many more elements can be inserted without a reallocation. E.g. reserve_exact(100) on an empty Vec allocates space for 100 elements.

By contrast, reserve will allocate space for the extra elements following the growth strategy. If you reserve(100) on an empty Vec, the allocation will actually be able to hold 128 (assuming the growth strategy is powers of two).


Actually that's not quite correct.

Vec::reserve(100) on an empty Vec will give you capacity 100, not 128 even though our amortization is indeed doubling.

The rules go roughly like this: suppose the length is L, the present capacity is C, and we call reserve(N):

1. L + N <= C ? Enough capacity already; we're done, return.

2. L + N <= C * 2 ? Ordinary doubling; grow to capacity C * 2.

3. Otherwise, try to grow to L + N.

This means we can grow any amount more quickly than the amortized growth strategy, or at the same speed, but never less quickly. We can go 100, 250, 600, 1300 and we can go 100, 200, 400, 800, 1600, but we can't do 100, 150, 200, 250, 300, 350, 400, 450, 500...
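Those rules are easy to watch on stable Rust; the exact capacities below match the current std implementation, but only "at least this much" is actually guaranteed:

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();
    v.reserve(100);
    // growing from empty goes straight to L + N = 100, not 128
    assert_eq!(v.capacity(), 100);

    v.extend(std::iter::repeat(0u8).take(100)); // fill it: L = 100, C = 100
    v.reserve(1);
    // L + N = 101 <= C * 2, so ordinary doubling kicks in: capacity 200
    assert_eq!(v.capacity(), 200);
}
```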


Honestly, I doubt it. That exposes many details that many programmers would prefer not to know.

The higher level the language, the less interest there is to manually manage memory. It is just something to offload to the gc/runtime/etc.

So, I think this is a no-go. The market won't accept it.


You already don’t have a choice. The reason we are all in the cloud is that hardware stopped scaling properly vertically and had to scale horizontally, and we needed abstractions that kept us from going insane doing that.

If you really want to dumb down what I’m suggesting, it’s tantamount to blade servers with a better backplane, treating the box as a single machine instead of a cluster. If IPC replaces a lot of the RPC, you kick the Amdahl’s Law can down the road at least a couple of process cycles before we have to think of more clever things to do.

We didn’t have any of the right tooling in place fifteen years ago when this problem first became unavoidable, but it is now within reach, if not in fact extant.


It's tricky to decode all this but there are a lot of misconceptions.

First, Amdahl's law just says that the non-parallel parts of a program become more of a bottleneck as the parallel parts are split up more. It's trivial and obvious, and it has nothing to do with being able to scale to more cores, because it says nothing about how much can be parallelized.
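Concretely, for serial fraction s on n cores, Amdahl's law bounds the speedup at 1 / (s + (1 - s)/n); a small numeric check:

```rust
// Amdahl's law: with serial fraction `s`, n cores give at most
// 1 / (s + (1 - s) / n) overall speedup.
fn speedup(s: f64, n: f64) -> f64 {
    1.0 / (s + (1.0 - s) / n)
}

fn main() {
    // 10% serial work: 16 cores yield only 6.4x...
    assert!((speedup(0.1, 16.0) - 6.4).abs() < 1e-9);
    // ...and even effectively unlimited cores cap out near 10x
    assert!((speedup(0.1, 1e12) - 10.0).abs() < 1e-3);
}
```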

Second, in your other comment: there is nothing special about Rust "having the semantics" for NUMA. People have been programming NUMA machines since they have existed (obviously). NUMA just means that some memory addresses are local and some are not. If you want things to be fast, you need to use the local addresses as much as possible.


You don't need every programmer to leverage the architecture for the market to accept it, just a few that hyper-optimize an implementation to a highly valuable problem (e.g., ads or something like that).


QuickShell - it should be called


Quicshell*


QSH?


At least that isn’t an existing ham radio Q-code!


That's already a project (library for building a desktop environment).


I think you would find it very difficult to find even a single game which has done this.

