
This strikes me as a good idea I've never seen articulated before. Something like a sticky scroll which accrues all off screen cursors, limited to some max to prevent things getting out of hand.

This is a valid concern imo, but I'm not too afraid about nvim itself being compromised. I do think it is risky to be depending on many plugins, which is why I'm hoping nvim can integrate more of the popular plugins into nvim proper.

I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.

Here's a quote from Bjarne,

> So go back about one year, and we could vote about it before it got into the standard, and some of us voted no. Now we have a much harder problem. This is part of the standard proposal. Do we vote against the standard because there is a feature we think is bad? Because I think this one is bad. And that is a much harder problem. People vote yes because they think: "Oh we are getting a lot of good things out of this.", and they are right. We are also getting a lot of complexity and a lot of bad things. And this proposal, in my opinion is bloated committee design and also incomplete.


I implemented Contracts in the C++ language in the early 90's as an extension.

Nobody wanted it.

https://www.digitalmars.com/ctg/contract.html


I think it's also true that, regardless of the desirability of the feature at the time (which sibling comments discuss eloquently), people who've bought into a language are usually quite wary of also buying into extensions to that language. The very act of ratification by the committee gives this proposal a ‘feature’ the DMC++ extension lacked: expectations of compatibility over time and across implementations. It's not necessarily a comment on the technical quality or desirability of the work itself.

But then why did you add your contract system to D? You implemented your contract system in the "early 90's", and D was released in 2001, so that's nearly a decade of "nobody wanted it". So then why add them as a core language feature of a new programming language if no one wanted it? Why is it still a core language feature? And why object to C++ finally adding contracts? I just don't get what you're even arguing here.

It's a great question! I simply had faith that it was a good idea.

The reason I started D in the first place is the C++ community was uninterested in any of my ideas for improvement. Ironically, C++ wound up adopting a lot of them one by one! Such as contracts!

Contracts did find an enthusiastic constituency in the D community.


Contracts are a good idea, but I find the implementation of them to be clunky. I'd much rather contracts be part of the type system than as function signatures. Using the example in your earlier link, instead of defining day's 1..31 range within the Date-struct invariant, you'd instead declare a "day" type that's an int whose value cannot exceed 31 or be less than 1. This would be checked and enforced anytime a variable of the type is [re]assigned, set as a field, or passed-in as a parameter.

How do you know nobody wanted it?

> How do you know nobody wanted it?

Some imperfect data points on how to judge if a language feature is wanted (or not):

- Discussion on forums about how to use the feature

- Programs in the wild using the feature

- Bug reports showing people trying to use the feature and occasionally producing funny interactions with other parts of the language

- People wanting to do more complex things on top of the initially built feature by filing feature requests. (Showing that there is uptake on the feature and people want to do more fancy advanced things)


I listen to the users. There was never any mention of it.

it could be possible that llms can make great use of them

> it could be possible that llms can make great use of them

This is actually a good point. Yes, LLMs have saturated the conversation everywhere but contracts help clarify the pre-post conditions of methods well. I don't know how good the implementation in C++ will be but LLMs should be able to really exploit them well.


The problem with that is that C++26 Contracts are just glorified asserts. They trigger at runtime, not compile time. So if your LLM-generated code would have worked 99% of the time and then crashed in the field... well, now it will work 99% of the time and (if you're lucky) call the contract-violation handler in the field.

Arguably that's better (more predictable misbehavior) than the status quo. But it's not remotely going to fix the problem with LLM-generated code, which is that you can't trust it to behave correctly in the corner cases. Contracts can't make the code magically behave better; all they can do is make it misbehave better.


In my experience, llms don't reason well about expected states, contracts, invariants, etc. Partly because they don't have long-term memory and are often forced to reason about code in isolation. Maybe this means all invariants should go into AGENTS.md/CLAUDE.md files, or into doc strings so a new human reader will quickly understand assumptions.

Regardless, I think a habit of putting contracts to make pre- and post-conditions clear could help an AI reason about code.

Maybe instead of suggesting a patch to cover up a symptom, an AI may reason that a post-condition somewhere was violated, and will dig towards the root cause.

This applies just as well to asserts, too. Contracts/asserts actually need to be added to tell a reader something.


> Nobody wanted it.

The fact that the C++ standards community has been working on Contracts for nearly a decade by itself refutes your claim.

I understand you want to self-promote, but there is no need to do it at the expense of others. I mean, might it be that your implementation sucked?


Late nineties is approaching thirty decades ago; if the C++ committee has now been working on this for nearly a decade, that's fifteen to twenty years of them not working on it. It's quite plausible that contracts simply weren't valued at the time.

Also, in my view the committee has been entertaining wider and wider language extensions. In 2016 there was a serious proposal for a graphics API based on (I think) Cairo. My own sense is that it's out of control and the language is just getting stuff added on because it can.

Contracts are great as a concept, and it's hard to separate the wild expanse of C++ from the truly useful subset of features.

There are several things proposed in the early days of C++ that arguably should be added.


I am not sure what the "truly useful features" are, if you take into account that C++ goes from games to servers to embedded, audio, heterogeneous programming, some GUI frameworks, real-time systems (hard real-time) and some more.

I would say some of the features that are truly useful in some niches are less important in others, and vice versa.


> Late nineties is approaching thirty decades ago

Boy, this makes me feel old... oh wait :)

(I agree with your point; early 90s vs. mid-10s are two very different worlds, in this context.)


Wow. It’s such a funny typo I wouldn’t correct it now even if I was still able to edit.

So what was it like back in the Egyptian age? :)


> I understand you want to self-promote

Not a very fair assumption. However, even if your not-so-friendly point were true, I'd like people who have invented popular languages to "self-promote" more (here dlang). It is great to get comments on HN from people who have actually achieved something nice!


35 years is a lot longer than a decade. C++ should have copied the '= void;' syntax, too!

It should copy Zig's '= undefined;' instead of D's '= void;'. The latter is very confusing: why have a keyword that means nothing, but also anything? This is a pretty common flaw within D, see also: static.

Nobody in D was confused by `= void;`. People understood it immediately.

> why have a keyword that means nothing, but also anything?

googling void: "A void is a space containing nothing, a feeling of utter emptiness, or a legal nullity, representing a state of absolute vacancy or lack."

Sounds perfect!


"People" doesn't include me then. I had no idea that D had this feature for quite some time, despite using it fairly often in Zig, because when considering what the equivalent would be to search for, my brain somehow didn't make the leap to the keyword that represents literally nothing. Or as your Google search result says, "representing a state of absolute vacancy or lack." A less inappropriate use of "= void;" would be to zero-out something. I honestly find D's continual misuse of keywords like this to be really off putting and a contributing factor as to why I've stopped using it.

In the early 1990s, C++ had not yet been standardized by ISO, so your argument doesn’t apply to that period.

I can’t speak to the C++ contract design — it’s possible bad choices were made. But contracts in general are absolutely exactly what C++ needs for the next step of its evolution. Programming languages used for correct-by-design software (Ada, C++, Rust) need to enable deep integration with proof assistants to allow showing arbitrary properties statically instead of via testing, and contracts are /the/ key part of that — see e.g. Ada Spark.

C++ is the last language I'd add to any list of languages used for correct-by-design - it's underspecified in terms of semantics with huge areas of UB and IB. Given its vast complexity - at every level from the pre-processor to template meta-programming and concepts, I simply can't imagine any formal denotational definition of the language ever being developed. And without a formal semantics for the language, you cannot even start to think about proof of correctness.

As with Spark, proving properties over a subset of the language is sufficient. Code is written to be verified; we won’t be verifying interesting properties of large chunks of legacy code in my career span. The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way.

I don’t think this is a good comparison. Ada (on which Spark is based) has every safety feature and guardrail under the sun, while C++ (or C) has nothing.

There is a lot of tooling for C though, just not in mainstream compilers.

> The C (near-) subset of C++ is (modulo standard libraries) a starting point for this; just adding on templates for type system power (and not for other exotic uses) goes a long way.

In my experience, this is absolutely true. I wrote my own metaprogramming frontend for C and that's basically all you need. At this point, I consider the metaprogramming facilities of a language its most important feature, by far. Everything else is pretty much superfluous by comparison.


I don’t understand this “next evolution” approach to language design.

A language should be finished at some point. People can always develop new languages with more or fewer features, but piling more things on is just not that useful.

It sounds cool in the minds of people that are designing these things but it is just not that useful. Rust is in the same situation of adding endless crap that is just not that useful.

Specifically about this feature, people can just use asserts. Piling things onto the type system of C++ is never going to be that useful since it is not designed to be a type system like Rust's type system. Any improvement gained is not worth piling on more things.

Feels like people that push stuff do it because "it is just what they do".


Many of the recent C++ standards have been focused on expanding and cleaning up its powerful compile-time and metaprogramming capabilities, which it initially inherited by accident decades ago.

It is difficult to overstate just how important these features are for high-performance and high-reliability systems software. These features greatly expand the kinds of safety guarantees that are possible to automate and the performance optimizations that are practical. Without it, software is much more brittle. This isn’t an academic exercise; it greatly reduces the amount of code and greatly increases safety. The performance benefits are nice but that is more on the margin.

One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.


> One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++.

Aren’t Rust macros more powerful than C++ template metaprogramming in practice?


Rust has two separate macro systems. It has declarative "by example" macros which are a nicer way to write the sort of things where you show an intern this function for u8 and ask them to create seven more just like it except for i8, u16, i16, u32, i32, u64, i64. Unlike the C pre-processor these macros understand how loops work (sort of) and what types are, and so on, and they have some hygiene features which make them less likely to cause mayhem.

Declarative macros deliberately don't share Rust's syntax because they are macros for Rust so if they shared the same syntax everything you do is escape upon escape sequence as you want the macro to emit a loop but not loop itself etc. But other than the syntax they are pretty friendly, a one day Rust bootstrap course should probably cover these macros at least enough that you don't use copy-paste to make those seven functions by hand.

However the powerful feature you're thinking of is procedural or "proc" macros, and those are a very different beast. Proc macros are effectively compiler plugins: when the compiler sees we invoked the proc macro, it just runs that code, natively. So in that sense these are certainly more powerful; they can for example install Python: "Oh, you don't have Python, but I'm a proc macro for running Python, I'll just install it...". Mara wrote several "joke" proc macros which show off how dangerous/powerful this is. You should not use these, but one of them for example switches to the "nightly" Rust compiler and then seamlessly compiles parts of your software which don't work in stable Rust...


No, they are not.

They are both; there are things that Rust's macros can do metaprogramming-wise that C++ templates cannot do and vice-versa.

Rust's macros work on a syntactic level, so they are more powerful in that they can work with "normally" invalid code and perform token-to-token transformations (and in the case of proc macros effectively function as compiler extensions/plugins) and less powerful in that they don't have access to semantic information.


Incorrect.

> powerful compile-time and metaprogramming capabilities

While I agree that, generally, compile time metaprogramming is a tremendously powerful tool, the C++ template metaprogramming implementation is hilariously bad.

Why, for example, is printing the source-code text of an enum value so goddamn hard?

Why can I not just loop over the members of a class?

How would I generate debug vis or serialization code with a normal-ish looking function call (spoiler, you can't, see cap'n proto, protobuf, flatbuffers, any automated dearimgui generator)

These things are incredibly basic and C++ just completely shits all over itself when you try to do them with templates


> Why, for example, is printing the source-code text of an enum value so goddamn hard?

Aside from this being trivial in C++26, imo it isn't actually that tricky. Here's a very quick implementation I made a while ago: https://github.com/Cons-Cat/libCat/blob/3f54e47f0ed182771fce...


> Aside from this being trivial in C++26

Great, it took them 51 years to make a trivial operation trivial. Call me next millennium when they start to figure out the nontrivial stuff, I guess.


Did you read the article? This is called reflection, and is exactly what C++26 introduces.

Yeah, like 50 years too late.

> One of the biggest knocks against Rust as a systems programming language is that it has weak compile-time and metaprogramming capabilities compared to Zig and C++

In the space of language design, everything "more powerful" is not necessarily good. Sometimes less power is better because it leads to more optimisable code, less implementation complexity, less abstraction, better LSP support. TL;DR More flexibility and complexity is not always good.

Though I would also challenge the fact that Rust's metaprogramming model is "not powerful enough". I think it can be.


But compile-time processing is certainly useful in a performance-oriented language.

And not only for performance but also for thread safety (eliminates initialization races, for example, for non-trivial objects).

Rust is just less powerful. For example, you cannot design something that comes even close to expression template libraries.


> And not only for performance but also for thread safety

This is already built-in to the language as a facet of the affine type system. I'm curious as to how familiar you actually are with Rust?

> Rust is just less powerful.

On the contrary. Zig and C++ have nothing even remotely close to proc macros. And both languages have to defer things like thread safety into haphazard metaprogramming instead of baking them into the language as a basic semantic guarantee. That's not a good thing.


Writing general generic code without repetition is one thing where Rust without specialization fails. It does not have variadics or comparably powerful compile-time metaprogramming. It does not come even remotely close.

Proc macros are basically plugins. I do not think this is even part of the "language" as such. It is just plugging new stuff into the compiler.


> For example you cannot design something that comes even close to expression templates libraries.

You keep saying this and it's still wrong. Rust is quite capable of expression templates, as its iterator adapters prove. What it isn't capable of (yet) is specialization, which is an orthogonal feature.


Rust cannot take a const function and evaluate that into the argument of a const generic or a proc macro. As far as I can tell, the reasons are deeply fundamental to the architecture of rustc. It's difficult to express HOW FUNDAMENTAL this is to strongly typed zero overhead abstractions, and we see where Rust is lacking here in cases like `Option` and bitset implementations.

> Rust cannot take a const function and evaluate that into the argument of a const generic

Assuming I'm interpreting what you're saying here correctly, this seems wrong? For example, this compiles [0]:

    const fn foo(n: usize) -> usize {
        n + 1
    }

    fn bar<const N: usize>() -> usize {
        N + 1
    }

    pub fn baz() -> usize {
        bar::<{foo(0)}>()
    }
In any case, I'm a little confused how this is relevant to what I said?

[0]: https://rust.godbolt.org/z/rrE1Wrx36


> Rust is quite capable of expression templates, as its iterator adapters prove.

AFAIU iterator adapters are not quite what expression templates are, because they rely on compiler optimizations rather than a built-in feature of the language that would let you do this without relying on the compiler pipeline.


I had always thought expression templates at the very least needed the optimizer to inline/flatten the tree of function calls that are built up. For instance, for something like x + y * z I'd expect an expression template type like sum<vector, product<vector, vector>> where sum would effectively have:

    vector l;
    product& r;
    auto operator[](size_t i) {
        return l[i] + r[i];
    }
And then product<vector, vector> would effectively have:

    vector l;
    vector r;
    auto operator[](size_t i) {
        return l[i] * r[i];
    }
That would require the optimizer to inline the latter into the former to end up with a single expression, though. Is there a different way to express this that doesn't rely on the optimizer for inlining?

Expression templates do not rely on the optimizer, since you're not dealing with the computations directly but rather with expressions (nodes), through which you defer the computation until the very last moment (when you have fully built an expression of expressions, basically almost an AST). This guarantees that you get zero cost when you really need it. What you're describing is something akin to copy elision and function folding through inlining, which is pretty much basic in any C++ compiler and happens automatically without special care.

> since you're not dealing with the computations directly but rather expressions (nodes) through which you are deferring the computation part until the very last moment (when you have a fully built an expression of expressions, basically almost an AST).

Right, I understand that. What is not exactly clear to me is how you get from the tree of deferred expressions to the "flat" optimized expression without involving the optimizer.

Take something like the above example for instance - w = x + y * z for vectors w/x/y/z. How do you get from that to effectively

    for (size_t i = 0; i < w.size(); ++i) {
        w[i] = x[i] + y[i] * z[i];
    }
without involving the optimizer at all?

The example is false because that's not how you would write an expression template for the given computation, so the question of how the optimizer is not involved is also not set in the correct context, and I can't give you an answer for it. Of course the optimizer is generally going to be involved, as it is for all the code and not just the expression templates, but expression templates do not require the optimizer in the way you're trying to suggest. Expression templates do not rely on O1, O2 or O3 levels being set - they work the same way at O0 too, and that may be the hint you were looking for.

> The example is false because that's not how you would write an expression template for given computation

OK, so how would you write an expression template for the given computation, then?

> Expression templates do not rely on O1, O2 or O3 levels being set - they work the same way in O0 too and that may be the hint you were looking for.

This claim confuses me given how expression templates seem to work in practice?

For example, consider Todd Veldhuizen's 1994 paper introducing expression templates [0]. If you take the examples linked at the top of the page and plug them into Godbolt [1] (with slight modifications to isolate the actual work of interest), you can see that with -O0 you get calls to overloaded operators instead of the nice flattened/unrolled/optimized operations you get with -O1.

You see something similar with Eigen [2] - you get function calls to "raw" expression template internals with -O0, and you need to enable the optimizer to get unrolled/flattened/etc. operations.

Similar thing yet again with Blaze [3].

At least to me, it looks like expression templates produce quite different outputs when the optimizer is enabled vs. disabled, and the -O0 outputs very much don't resemble the manually-unrolled/flattened-like output one might expect (and arguably gets with optimizations enabled). Did all of these get expression templates wrong as well?

[0]: https://web.archive.org/web/20050210090012/http://osl.iu.edu...

[1]: https://cpp.godbolt.org/z/Pdcqdrobo

[2]: https://cpp.godbolt.org/z/3x69scorG

[3]: https://cpp.godbolt.org/z/7vh7KMsnv


Look, I have just completed work on a high-performance serialization library which avoids computing heavy expressions and temporary allocations, all by using expression templates, and no, optimization levels are not needed. The code works as advertised at O0 - that's the whole deal around it.

If you have a genuine question you should ask one, but please do not disguise it so that it only goes to prove your point. I am not that naive. All I can say is that your understanding of expression templates is not complete and therefore you draw incorrect conclusions. The silly example you provided shows that you don't understand what expression template code looks like, and yet you're trying to prove your point over and over again.

Also, most of the time I am writing my comments on my mobile, so I understand that my responses sometimes appear too blunt, but in any case I am obviously not going to write, run, or check code as if I were at work. My comments here are not work; I am not here to win arguments, but mostly to learn from other people's experiences, and sometimes to dispute conclusions based on those experiences. If you don't believe me, or you believe expression templates work differently, then so be it.

> If you have a genuine question you should ask one but please do not disguise so that it only goes to prove your point.

I think my question is pretty simple: "How does an optimizer-independent expression template implementation work?" Evidently the resources I've found so far describe "optimizer-dependent expression templates", and apparently none of the "expression template" implementations I've had reason to look at disabused me of that notion.

> My comments here is not work, and I am not here to win arguments, but most of the time learn from other people's experiences, and sometimes dispute conclusions based on those experiences too.

Sure, and I like to learn as well from the more knowledgeable/experienced folk here, but as much as I want to do so here I'm finding it difficult since there's precious little for me to go off of beyond basically just being told I'm wrong.

> If you don't believe me, or you believe expression templates work differently, then so be it.

I want to understand how you understand expression templates, but between the above and not being able to find useful examples of your description of expression templates I'm at a bit of a loss.


Expression templates do AST manipulation of expressions at compile time. Let's say you have a complex matrix expression that naively maps to multiple BLAS operations but can be reduced to a single BLAS call. With expression templates you can translate one to the other, this is a static manipulation that does not depend on compiler level. What does depend on the compiler is whether the incidental trivial function calls to operators gets optimized away or not. But, especially with large matrices, the BLAS call will dominate anyway, so the optimization level shouldn't matter.

Of course in many cases the optimization level does matter: if you are optimizing small vector operators to simd inlining will still be important.


> With expression templates you can translate one to the other, this is a static manipulation that does not depend on compiler level.

How does that work on an implementation level? First thing that comes to mind is specialization, but I wouldn't be surprised if it were something else.

> What does depend on the compiler is whether the incidental trivial function calls to operators gets optimized away or not.

> Of course in many cases the optimization level does matter: if you are optimizing small vector operators to simd inlining will still be important.

Perhaps this is the source of my confusion; my uses of expression templates so far have generally been "simpler" ones which rely on the optimizer to unravel things. I haven't been exposed much to the kind of matrix/BLAS-related scenarios you describe.


Partial specialization specifically. Match some patterns and convert them to something else. For example:

  struct F { double x; };
  enum Op { Add, Mul };
  auto eval(F x) { return x.x; }
  template<class L, class R, Op op> struct Expr;
  template<class L, class R> struct Expr<L,R,Add>{  L l; R r; 
    friend auto eval(Expr self) { return eval(self.l) + eval(self.r); } };
  template<class L, class R> struct Expr<L,R,Mul>{  L l; R r; 
    friend auto eval(Expr self) { return eval(self.l) * eval(self.r); } };
  template<class L, class R, class R2> struct Expr<Expr<L, R, Mul>, R2, Add>{   Expr<L,R, Mul> l; R2 r; 
    friend auto eval(Expr self) { return fma(eval(self.l.l), eval(self.l.r), eval(self.r));}};
  template<class L, class R>
  auto operator +(L l, R r) { return Expr<L, R, Add>{l, r}; } 
  template<class L, class R>
  auto operator *(L l, R r) { return Expr<L, R, Mul>{l, r}; } 

  double optimized(F x, F y, F z) { return eval(x * y + z); }
  double non_optimized(F x, F y, F z) { return eval(x + y * z); }
Optimized always generates a call to fma, non-optimized does not. Use -O1 to see the difference (will inline trivial functions, but will not do other optimizations). -O0 also generates the fma, but it is lost in the noise.

The magic happens by specifically matching the pattern Expr<Expr<L, R, Mul>, R2, Add>; try to add a rule to optimize x+y*z as well.


Hrm, OK, that makes sense. Thanks for taking the time to explain! Guessing optimizing x+y*z would entail something similar to the third eval() definition but with Expr<L, Expr<L2, R2, Mul>, Add> instead.

I think at this point I can see how my initial assertion was wrong - specialization isn't fully orthogonal to expression templates, as the former is needed for some of the latter's use cases.

Does make me wonder how far one could get with rustc's internal specialization attributes...


What "endless crap that is just not that useful" has been added to Rust in your opinion?

Returning "impl Trait". async/await, Pin/Unpin/Waker. catch_unwind. Procedural macros. "auto impl trait for a type that implements another trait".

I understand some of these kinds of features are because Rust is Rust but it still feels useless to learn.

I'm not following rust development since about 2 years so don't know what the newest things are.


RPIT (Return Position impl Trait) is Rust's spelling of existential types. That is, the compiler knows what we return (it has certain properties) but we didn't name it (we won't tell you what exactly it is), this can be for two reasons:

1. We didn't want to give the thing we're returning a name, it does have one, but we want that to be an implementation detail. In comparison the Rust stdlib's iterator functions all return specific named Iterators, e.g. the split method on strings returns a type actually named Split, with a remainder() function so you can stop and just get "everything else" from that function. That's an exhausting maintenance burden, if your library has some internal data structures whose values aren't really important or are unstable this allows you to duck out of all the extra documentation work, just say "It's an Iterator" with RPIT.

2. We literally cannot name this type, there's no agreed spelling for it. For example if you return a lambda its type does not have a name (in Rust or in C++) but this is a perfectly reasonable thing to want to do, just impossible without RPIT.

Blanket trait implementations ("auto impl trait for type that implements other trait") are an important convenience for conversions. If somebody wrote a From implementation then you get the analogous Into, TryFrom and even TryInto all provided because of this feature. You could write them, but it'd be tedious and error prone, so the machine does it for you.


Like you said, it is possible not to use this feature, and arguably that creates better code.

It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to give a trait object.

A lambda is essentially a struct with a method so it is the same.

I understand about auto trait impl and agree but it is still annoying to me


> It is the right tradeoff to write those structs for libraries that absolutely have to avoid dynamic dispatch. In other cases it is better to give a trait object.

IMO it is a hack to use dynamic dispatch (a runtime behaviour with honestly quite limited use cases, like plugin functionality) to get existential types (a type system feature). If you are okay with parametric polymorphism/generics (universal types) you should also be okay with RPIT (existential types), which is the same semantic feature with a different syntax, e.g. you can get the same effect by CPS-encoding except that the syntax makes it untenable.

Because dynamic dispatch is a runtime behaviour it inherits a bunch of limitations that aren't inherent to existential types, a.k.a. Rust's ‘`dyn` safety’ requirements. For example, you can't have (abstract) associated types or functions associated with the type that don't take a magic ‘receiver’ pointer that can be used to look up the vtable.


It takes less time to compile and that is a huge upside for me personally. I am also not ok with parametric polymorphism except for containers like hashmap

Returning impl trait is useful when you can't name the type you're trying to return (e.g. a closure), types which are annoyingly long (e.g. a long iterator chain), and avoids the heap overhead of returning a `Box<dyn Trait>`.

Async/await is just fundamental to making efficient programs, I'm not sure what to mention here. Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.

Actively writing code for the others you mentioned generally isn't required in the average program (e.g. you don't need to create your own proc macros, but it can help cut down boilerplate). To be fair though, I'm not sure how someone would know that if they weren't already used to the features. I imagine it must be what I feel like when I see probably average modern C++ and go "wtf is going on here"


> Reading a file from disk, waiting for network I/O, etc are all catastrophically slow in CPU time and having a mechanism to keep a thread doing useful other work is important.

curious if you have benchmarks of "catastrophically slow".

Also, on Linux, the mainstream implementation translates async calls into blocking logic on a thread pool at the kernel level anyway.


Impl trait is just an enabler to create bad code that explodes compile times imo. I didn’t ever see a piece of code that really needs it.

I exclusively wrote rust for many years, so I do understand most of the features fair deeply. But I don’t think it is worth it in hindsight.


> Programming languages used for correct-by-design software (Ada, C++, Rust) ...

A shoutout to Eiffel, the first "modern" (circa 1985) language to incorporate Design by Contract. Well done Bertrand Meyer!


With people still paying to get the compiler, https://www.eiffel.com

eiffel.com is a support tooling company. Eiffel the language has the website <https://www.eiffel.org/>. Both sites are worth a look.

The people who did contracts are aware of Ada/SPARK and some have experience using it. Only time will tell if it works in C++, but they at least did all they could to give it a chance.

Note that this is not the end of contracts. This is a minimum viable start that they intend to add to, but the missing parts are more complex.


Might be the case that Ada folks successfully got a bad version of contracts not amenable for compile-time checking into C++, to undermine the competition. Time might tell.

I strongly doubt that C++ is what's standing in the way of Ada being popular.

Ada used to be mandated in the US defense industry, but lots of developers and companies preferred C++ and other languages, and for a variety of reasons, the mandate ended, and Ada faded from the spotlight.

>the mandate ended, and Ada faded from the spotlight

Exactly. People stopped using Ada as soon as they were no longer forced to use it.

In other words on its own merits people don't choose it.


On their own merits, people choose SMS-based 2FA, "2FA" which lets you into an account without a password, perf-critical CLI tools written in Python, externalizing the cost of hacks to random people who aren't even your own customers, eating an extra 100 calories per day, and a whole host of other problematic behaviors.

Maybe Ada's bad, but programmer preference isn't a strong enough argument. It's just as likely that newer software is buggier and more unsafe or that this otherwise isn't an apples-to-apples comparison.


I made no judgement about whether Ada is subjectively "bad" or not. I used it for a single side project many years ago, and didn't like it.

But my anecdotal experience aside, it is plain to see that developers had the opportunity to continue with Ada and largely did not once they were no longer required to use it.

So, it is exceedingly unlikely that some conspiracy against C++, motivated by mustache-twirling Ada gurus, is afoot. And even if that were true, knocking C++ down several pegs will not make people go back to Ada.

C#, Rust, and Go all exist and are all immensely more popular than Ada. If there were to be a sudden exodus of C++ developers, these languages would likely be the main beneficiaries.

My original point, that C++ isn't what's standing in the way of Ada being popular, still stands.


Ada is a greatly designed language and I mean this. The problem Ada has is highly proprietary compilers.

Not having experience myself, how is GNAT?

This is some pretty major conspiracy thinking, and would need some serious evidence. Do you have any?

[flagged]


Okay, on one hand, I'm very curious, but on the other hand, not really on topic for this forum. So I'll just leave a "wut".

The devil is in the details, because standardization work is all about details.

From my outside vantage point, there seems to be a few different camps about what is desired for contracts to even be. The conflict between those groups is why this feature has been contentious for... a decade now?

Some of the pushback against this form of contracts is from people who desire contracts, but don't think that this design is the one that they want.


Problem is contracts mean different things to different people, and that leads standard contracts support being a compromise that makes nobody happy. To some people contracts are something checked at runtime in debug mode and ignored in release mode. To others they’re something rigorous enough to be usable in formal verification. But the latter essentially requires a completely new C++ dialect for writing contract assertions that has no UB, no side effects, and so on. And that’s still not enough as long as C++ itself is completely underspecified.

This contracts proposal was intended to be a minimum viable product that does a little for a few people, but more importantly provides a framework that the people who want everything else can start building off of.

Right, I think the tension here is that we would like contracts to exist in the language, but the current design isn't what it needs to be, and once it's standardized, it's extremely hard to fix.

C++ needs to give itself up and make way for other, newer, modern languages that have far, far less baggage. It should be working with other languages to provide tools for interop and migration.

C++ will never, ever be modern and comprehensible because of 1 and 1 reason alone: backward compatibility.

It does not matter what version of C++ you are using, you are still using C with classes.


Why should C++ stop improving? Other languages don't need C++ to die to beat it.

Half-serious reason: because with each C++ version, we seem to get less and less what we want and more and more inefficiency. In terms of language design and compiler implementation. Are we even at feature-completeness for C++20 on major compilers yet? (In an actually usable bug-free way, not an on-paper "completion".)

The compiler design is definitely becoming more complicated but the language design has become progressively more efficient and nicer to use. I’ve been using C++20 for a long time in production; it has been problem-free for years at this point. It is not strictly complete, e.g. modules still aren’t usable, but you don’t need to wait for that to use it.

Even C++23 is largely usable at this point, though there are still gaps for some features.


gcc seems to have full C++20, almost everything in 23, and has implemented reflection for 26, which is probably the only thing anyone cares about in 26.

https://en.cppreference.com/w/cpp/compiler_support.html

Funny how gcc seems to be the top dog now, what happened to clang? Thought their codebase was supposed to be easier and more pleasant to work with? Or maybe just more hardcore compiler devs work on gcc?


Feature complete is a pretty hard goal to reach. It sounds like "added all the features" but is closer to "bug compatible across compilers" (not saying there are bugs just that recent versions have removed a lot of wiggle room for implementations)

Also modules was a lot and was kind of the reason it took so long. They are wonderful and I want them but proper implementations (even with many details being implementation defined) required a lot of work to figure out.

Most of the time all the compilers get ahead of the actual release, but in this case there were so many uncertainties that only rough implementations were available beforehand, and then post-release they had to adjust how they handle incremental compilation in user-facing ways.


Reflection was a desperate need. A useful and difficult-to-design feature.

There are also things like template for or inplace_vector. I think it has useful things. Just not all things are useful to everyone.


Ironically the C++ standards consortium doesn’t want C to improve anymore and wants people to just use C++. Rules for thee but not for me.

Stabilizing C as the language of the operating system and keeping it simple isn't without benefits.

But I do think the frustration on the C++ side that C++ can no longer be a superset of C is overblown.


C++ isn't great but so far I haven't seen anything I'd rather use.

I think you need to spend more time with literally any tool -- "Haven't seen anything I'd rather used" reads like "Haven't gotten over the initial learning curve with any other tool"

C++ is sub-optimal for almost any task. For low-level stuff, plain C or maybe Rust; for higher-level, Python, Lua, or some Lisp. C++ is a weird in-between language that's impossible to hold correctly.


> For low level stuff plain C

The nice thing about C++ is that you can more or less turn it into C, if you want. My C++ code is closer to C than idiomatic, modern C++, but I wouldn't want to miss the nice parts that C++ adds, such as lambda functions and the occasional template for generalization. Pretty much the only thing I'm missing from C are order-independent designated initializers, which became order-dependent in C++, and thus useless.

> "Haven't seen anything I'd rather used" reads like "Haven't gotten over the initial learning curve with any other tool"

What an odd thing to say. I simply don't like certain design decisions in other languages that I've checked out and tried, and therefore do not see any reason to switch. E.g. I tried Rust, but it's absolutely terrible for quick&dirty prototyping, which is my main job.


Some other language need to step up and rewrite/replace LLVM then, because no language that relies on a ~30 million loc backend written in C++ can ever hope to replace it.

Zig plans to make LLVM optional. Rust has Cranelift. Go afaik has no dependencies on the C++ ecosystem including LLVM. Python and some other languages are built with C, not C++. So, progress is being made slowly to replace LLVM as the defacto optimizing code backend. Alternatives are out there, may they compete and win! C++ makes me pessimistic about the future of humanity..

Languages don't write code, people do. No one has rewritten LLVM because it already exists, and such a project would be insanely expensive for little benefit.

A bureaucratic call from the top is not the way to do it.

Just beat it. Ah, not so easy huh? Libraries, ecosystem, real use, continuous improvements.

Even if it does not look so "clean".

Just beat it, I will move to the next language. I am still waiting.


C with classes is a very simplistic view of C++.

I for one can write C++ but I cannot write a single program in C. If the overlap was so vast, I would be able to write good C but I cannot.

I've done things with templates to express my ideas in C++ that I cannot do in other languages, and the behaviour of deterministic destructors is what sets it apart from C. It is comprehensible and readable to me.

I would argue that C++ is modern, since it is in use today. Perhaps your definition of "modern" is too narrow?


I mean the Carbon project exists

But why? You can do everything contracts do in your own code, yes? Why make it a language feature? I'm not against growing the language, but I don't see the necessity of this specific feature having new syntax.

Pre- and postconditions are actually part of the function signature, i.e. they are visible to the caller. For example, static analyzers could detect contract violations just by looking at the callsite, without needing access to the actual function implementation. The pre- and postconditions can also be shown in IDE tooltips. You can't do this with your own contracts implementation.

Finally, it certainly helps to have a standardized mechanisms instead of everyone rolling their own, especially with multiple libraries.


Is a pointer parameter an input, output, or both?

Input.

You are passing in a memory location that can be read or written to.

That’s it.


In terms of contract in a function, you might be passing the pointer to the function so that the function can write to the provided pointer address. Input/output isn't specifying calling convention (there's fastcall for that) - it is specifying the intent of the function. Otherwise every single parameter to a function would be an input because the function takes it and uses it...

I worked on a massive codebase where we used Microsoft SAL to annotate all parameters to specify intent. The compiler could throw errors based on these annotations to indicate misuse.

This seems like an extension of that.


Annotation sounds good (as long as it is enforced or honored), which is the best you can do in C++.

A language like C# has true directional parameters. C only truly has “input”


A pointer doesn't necessarily point to memory.

A nitpick to your nitpick: they said "memory location". And yes, a pointer always points to a memory location. Notwithstanding that each particular region of memory locations could be mapped either to real physical memory or any other assortment of hardware.

No. Neither in the language (NULL exists) nor necessarily on real CPUs.

NULL exists on real CPUs. Maybe you meant nullptr which is a very different thing, don't confuse the two.

I don't agree. Null is an artefact of the type system and the type system evaporates at runtime. Even C's NULL macro just expands to zero which is defined in the type system as the null pointer.

Address zero exists in the CPU, but that's not the null pointer, that's an embarrassment if you happen to need to talk about address zero in a language where that has the same spelling as a null pointer because you can't say what you meant.


Null doesn't expand to zero on some weird systems. These days zero is special on most hardware, so having zero and nullptr be the same is important, even though on some of them zero is also legal.

Historically C's null pointer literal, provided as the pre-processor constant NULL, is the integer literal 0 (optionally cast to a void pointer in newer standards) even though the hardware representation may not be the zero address.

It's OK that you didn't know this if you mostly write C++ and somewhat OK that you didn't know this even if you mostly write C but stick to pre-defined stuff like that NULL constant, if you write important tools in or for C this was a pretty important gap in your understanding.

In C23 the committee gave C the C++ nullptr constant, and the associated nullptr_t type, and basically rewrote history to make this entire mess, in reality the fault of C++ now "because it's for compatibility with C". This is a pretty routine outcome, you can see that WG14 members who are sick of this tend to just walk away from the committee because fighting it is largely futile and they could just retire and write in C89 or even K&R C without thinking about Bjarne at all.


You can point to a register which is certainly not memory.

Contracts are about specifying static properties of the system, not dynamic properties. Features like assert /check/ (if enabled) static properties, at runtime. static_assert comes closer, but it’s still an awkward way of expressing Hoare triples; and the main property I’m looking for is the ability to easily extract and consider Hoare triples from build-time tooling. There are hacky ways to do this today, but they’re not unique hacky ways, so they don’t compose across different tools and across code written to different hacks.

The common argument for a language feature is for standardization of how you express invariants and pre/post conditions so that tools (mostly static tooling and optimizers) can be designed around them.

But like modules and concepts the committee has opted for staggered implementation. What we have now is effectively syntax sugar over what could already be done with asserts, well designed types and exceptions.


DIY contracts don't compose when mixing code that uses different DIY implementations. Some aspects of contracts have global semantics.

C++ contracts standardizes what people already do in C++. Where is the complexity in that? It removes the need to write your own implementation because the language provides a standard interoperable one.

An argument can be made that C++26 features like reflection add complexity but I don't follow that argument for contracts.


The quote of Bjarne is a bit out of context. It was made after an hour long talk about the pitfalls and problems of contracts in c++26: https://youtu.be/tzXu5KZGMJk

This should also clarify the complexity issue.


> It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.

This is a common sentiment about C++, but I find it very interesting that everyone seems to have a different feature in mind when they say it.


I think that's a clear and unambiguous point in favor of the argument. There are so many hellishly complex things in C++ that the community can't settle on even a small subset to be the worst contender.

Half Life 3 rules apply. Every time someone complains about complexity in C++, the committee adds a new overly complex feature. It remains a problem because complexity keeps getting shoveled on top of the already complex language.


> I am somewhat dismayed that contracts were accepted. It feels like piling on ever more complexity to a language which has already surpassed its complexity budget, and given that the feature comes with its own set of footguns I'm not sure that it is justified.

I don't think this opinion is well informed. Contracts are a killer feature that allows implementing static code analysis that covers error handling and verifiable correct state. This comes for free in components you consume in your code.

https://herbsutter.com/2018/07/02/trip-report-summer-iso-c-s...

Asserting that no one wants their code to correctly handle errors is a bold claim.


Contracts aren't for handling errors. That blog post is extremely out of date, and doesn't reflect the current state of contracts

Modern C++ contracts are being sold as being purely for debugging. You can't rely on contracts like an assert to catch problems, which is an intentional part of the design of contracts


Just because Bjarne thinks the feature is bad doesn't mean it is bad. He can be wrong. The point is, most people disagree with him, and so a lot of people do think it is good.

There have been several talks about contracts and the somewhat hidden complexities in them. C++ contracts are not like what you'd initally expect. Compiler switches can totally alter how contracts behave from getting omitted to reporting failures to aborting the program. There is also an optional global callback for when a contract check fails.

Different TUs can be compiled with different settings for the contract behavior. But can they be binary compatible? In general, no. If a function is defined inline in a header, the compiler may have generated two different versions with different contract behaviors, which violates the ODR.

What happens if the contract check calls a helper function that throws an exception?

The whole thing is really, really complex and I don't assume that I understand it properly. But I can see that there are some valid concerns against the feature as standardized, and that makes me side with the opposition here: this was not baked enough yet.


That sounds like the worst kind of misfeature.

It sounds like it should solve your problem. At first it seems to work. Then you keep on finding the footguns after it is too late to change the design.


Contracts are designed as a minimum thing that can work. The different groups who want different, conflicting things out of contracts now have a common place and syntax to start from, so they can add what they want without either breaking someone else or, worse, each group doing things in a non-uniform way and thus causing footguns.

Contracts as they are today won't solve every problem. However they can expand over time to solve more problems. (or at least that is the hope, time will tell - there is already a lot of discussion on what the others should be)


I think that a "minimal viable baseline" type implementation should not break the ODR.

In Rust these types of proposals are common, in C++ less so. The incredibly tedious release process encourages everyone to put in just as much complexity as they can safely get away with.


Coroutines went through the same cycle. Standardized in C++20, and I still hit compiler-specific differences in how symmetric transfer gets lowered.

That's a genius idea, keep adding broken stuff into the standard until there's no choice but to break compatibility to fix it.

No no no, you add new stuff that will totally fix those problems!


>to a language which has already surpassed its complexity budget

I've been thinking that way for many years now, but clearly I've been wrong. Perhaps C++ is the one language to which the issue of excess complexity does not apply.


In essence, a standard committee thinks like bureaucrats. They have little to no incentive to get rid of cruft and only piling on new stuff is rewarded.

In D, we are implementing editions so features that didn't prove effective can be removed.

I don't know what you mean by effective - I can come up with several different/conflicting definitions in this context.

I think what you meant to say is popular. If a feature is popular it doesn't matter how bad it turns out in hindsight: you can't remove it without breaking too much code (you can slowly deprecate it over time, I'm not sure how you handle deprecation in D, so perhaps that is what editions give you). However if a great feature turns out not to be used you can remove it (presumably to replace it with a better version that you hope people will use this time, possibly reusing the old syntax in a slightly incompatible way)


I am sadly not in the position to use D at work, but I appreciate your work!

Yeah dude but you've really marketed D poorly. I remember looking at D what must be 15 years back or so? And I loved the language and was blown away by its beauty and cool features. But having no FOSS compiler and the looming threat of someone claiming a patent (back then it was unclear that Mono/C# was "legal" and even Java hung in the balance) was too scary for me to touch it.

Now I'm old and I believe D has missed its opportunity. Kinda sad.


D is 100% open source. The gnu D compiler and the LLVM D compiler were always 100% open source.

I don't recall anyone making a patent claim.

Open source and free software isn't the same thing. Nobody made a claim on Java either, until someone did. I just distinctly remember explicitly not exploring D for that reason. Also this was way before LLVM, and I also don't think GNU had a D compiler back then. There was only the (and I really believe it was closed source) Digital Mars compiler.

15 years ago, both LLVM and GNU had a D compiler. gdc (the GNU compiler) was not an official part of the gcc collection, but it was definitely there and 100% open source.

All three compilers shared the open source D front end. The DMD backend source code was available for anyone to use, it just couldn't be redistributed. We were eventually able to fully Boost license it.

The DMD compiler always had source available for free from Digital Mars. I never sold a single copy :-)


The scheme folks managed to shed complexity between R6RS and R7RS, I believe.

So perhaps I think the issue is not committees per se, but how the committees are put together and what are the driving values.


Notably they didn't fully shed it, they compartmentalized it. They proposed to split the standard into two parts: r7rs-small, the more minimal subset closer in spirit to r5rs and missing a lot of stuff from r6rs, and r7rs-large, which would contain all of r6rs plus everyone's wildest feature dreams as well as the kitchen sink.

It worked remarkably well. r7rs-small was done in 2013 and is enjoyed by many. The large variant is still not done and may never be done. That's no problem though, the important point was that it created a place to point people with ideas to instead of outright telling them "no".


Is there any good documentation about contracts? https://en.cppreference.com/w/cpp/language/contracts.html is incredibly confusing - its first displayed example seems to be an edge case where the assertion itself causes a mutation?

https://en.cppreference.com/w/cpp/language/function.html#Fun... is vaguely better, but still quite dense.

IMO the syntax makes things hard for a newcomer to understand, which I see as core to any programming language's goal of community.

    double square_root(double num) asserts_pre(num >= 0)
would have been far more self-evident than just

    double square_root(double num) pre(num >= 0)
But I suppose brevity won out.

I believe that https://isocpp.org/files/papers/P2900R14.pdf is the paper, which doesn't mean it's good documentation, as it's meant for modifying the standard. However, in its early sections, it does link to other papers which have more information, and the "proposed wording" section should be where the standardese lives, with the rest of it being context.

Contracts are already informally a thing: most functions have preconditions, and if you break those preconditions, the function doesn't make any guarantees of what it does.

We already have some primitive ways to define preconditions, notably the assert macro and the 'restrict' qualifier.

I don't mind a more structured way to define preconditions which can automatically serve as both documentation and debug invariant checks. Though you could argue that a simpler approach would be to "standardize" a convention to use assert() more liberally in the beginning of functions as precondition checks; that a sequence of 'assert's before non-'assert' code should semantically be treated as the functions preconditions by documentation generators etc.

I haven't looked too deep into the design of the actual final contracts feature, maybe it's bad for reasons which have nothing to do with the fundamental idea.


I wonder if C++ already has so much complexity, that it would actually be a good idea to ignore feature creep, and implement any feature with even the most remote use-case.

It sounds (and probably is) insane. But if a feature doesn't break backwards compatibility, and can be implemented in a way that only negligibly affects compiler/IDE performance for codebases that ignore it, what's the issue? Specifically, what significant new issues would it cause that C++'s existing bloat hasn't?

C++20 isn't fully implemented in any one compiler (https://en.cppreference.com/w/cpp/compiler_support.html#C.2B...).


GCC and MSVC are pretty close. fyi, the tables on cppreference are rather outdated at this point. I made a more up-to-date, community-maintained site: https://cppstat.dev/?conformance=cpp20

On the main page, it took me a minute after laughing at "std::byte arithmetic" to realize it was April 1st. All of those C++29 "features" are on point, very funny. Though surely there's somewhere SFINAE can be mentioned...

wow, that's weird. One would think that updating the reference table is something a team or individual - who just spent a lot of time and effort on implementing a feature - would also do.

For a while now cppreference.com has been in "temporary read-only mode" in which it isn't updated. Eventually I expect a "temporary" replacement will dominate and eventually it won't be "temporary" after all. Remember when some of Britain's North American colonies announced they were declaring independence? Yeah me either, but at the time I expect some people figured hey, we send a bunch of troops, burn down some stuff, by Xmas we'll have our colonies back.

Why does this need to access to all my repository just for generating a PR?

I am not aware of such a thing. Is GitHub itself requesting that when you create a PR, or is it because of the site's editor?

The editor. Otherwise, the site looks great!

> So go back about one year, and we could vote about it before it got into the standard, and some of us voted no. Now we have a much harder problem. This is part of the standard proposal.

Offtopic, but this is a problem in the web world, too. Once something is on a standards track, there are almost no mechanisms to vote "no, this is bad, we don't need this". The only way is to "champion" a proposal and add fixes to it until people are somewhat reasonably happy and a consensus is reached. (see https://x.com/Rich_Harris/status/1841605646128460111)


C++ isn't the first language to do things, but was/is often the first mainstream language to do things.

And then people complain about C++ for doing it wrong, or its complexity, and show language 'X' that does it better/right, but only because they saw C++ do it first, and 'not quite right'.

I expect contracts to be similar - other languages will watch, learn, and do version two, and then complain about c++, etc.

It took 'quite a while' to get rid of auto_ptr, for example.

If it wasn't for the fact this is a language feature, it would be better off in boost where it can be tested in the wild.


Can you share what aspects of the design you (and Stroustrup) aren't happy with? Stroustrup has a tendency of being proven right, with a 1-3 decade lag.

Certainly we can say that Bjarne will insist he was right decades later. We can't necessarily guess - at the time - what it is he will have "always" believed decades later though.

You made me laugh!...Bjarne indeed can't be accused of being a modest man. And by some accounts, he's quite a political animal.

But in fairness, when was D&E first published? Argued for auto there, long before their acceptance. Argued for implicit template instantiation - thank god the "everything-must-be-explicit" curmudgeons were vanquished there, too.

He's got a pretty good batting average - certainly better than Herb Sutter.


Well that's not always true. Initializer lists are a glaring example. So is integer promotion, among some other things.

Integer promotion? Stroustrup pleads C source compat - without it, C++ would have been stillborn.

Initializer lists suck mainly because of C source compat constraints, too. In fact, most things that suck in C++ came from B via C.


I mean... it's C++. The complexity budget is like the US government's debt ceiling.

Has any project ever tried to quantify a “complexity budget” and stick to it?

I’m fascinated by the concept of deciding how much complexity (to a human) a feature has. And then the political process of deciding what to remove when everyone agrees something new needs to be accepted.


Geez if Bjarne thinks it's

> bloated committee design and also incomplete

That's truly in that backdoor alley catching fire


Without a significant amount of needed context that quote just sounds like some awkward rambling.

Also, almost every feature added to C++ adds a great deal of complexity, everything from modules, concepts, ranges, coroutines... I mean, it's been 6 years since these were standardized and all the main compilers still have major bugs and quality-of-implementation issues.

I can hardly think of any major feature added to the language that didn't introduce a great deal of footguns, unintended consequences, significant compilation performance issues... to single out contracts is unusual to say the least.


It doesn't sound that way to me, but there's a lot of context at https://youtu.be/tzXu5KZGMJk?t=3160

There is a roadmap and github issue tracking what is needed for 1.0.

https://github.com/neovim/neovim/issues/20451

https://neovim.io/roadmap/


Quality is not the issue. We should be more specific - Microsoft has been consciously employing dark patterns that they know will be harmful to users but they do not care because their incentives are misaligned. Employees of Microsoft have surely had meetings scheming of ways to degrade user experience for some internal metric they are trying to hit. A decade+ of conscious, willful decisions which negatively impact users.

I personally will never forgive them for uploading the entirety of my users dir to OneDrive without asking for permission. They're --still-- doing this. Whatever decision making process they have in place that not only cooked this scheme up, but allowed it to continue for years must be broken beyond repair. It's contemptuous, backwards, and hostile to users. It cannot be condemned enough.

This blog post talks about taskbar positioning and vaguely gesturing at quality, which is whatever. I'm not mad about removing features or even a higher incidence of bugs. I'm mad about hostile dark patterns that they have consciously chosen to employ at an ever increasing rate. I don't think you can fix this without drastic company wide changes.

For as long as I live, if I have a choice, I will avoid Microsoft products. They cannot be trusted.


Precisely. When I noticed the vagueness of most of those points, I did a Ctrl+F for "OneDrive". It produced zero results. Which is all I needed to know about Microsoft's sincerity. In fact, let me quote the first two items on that list in their entirety, then comment on them:

> More taskbar customization, including vertical and top positions:

> Repositioning the taskbar is one of the top asks we’ve heard from you. We are introducing the ability to reposition it to the top or sides of your screen, making it easier to personalize your workspace.

Yes. Not being able to reposition the taskbar is definitely the biggest problem that users have been complaining about. They don't care about Recall trying to store screenshots in an insecure database, or OneDrive uploading copies of all their data without asking permission. It's being able to put the taskbar on the side of the screen that they care about most.

(To be fair, people do care about this and it's not at all a bad thing that they're giving more options back. It's just not deserving of the #1 spot).

> Integrating AI where it’s most meaningful, with craft and focus:

> You will see us be more intentional about how and where Copilot integrates across Windows, focusing on experiences that are genuinely useful and well‑crafted. As part of this, we are reducing unnecessary Copilot entry points, starting with apps like Snipping Tool, Photos, Widgets and Notepad.

Notice this does not say "You will have one checkbox, prominently placed in the Settings app, that says 'Turn Copilot off entirely, remove it from my computer, and never mention it again until I uncheck this box'." Nope, they're still going to push Copilot in unnecessary places, they're just going to be more subtle about it.


It's basically the blog post version of "Yes | Remind me later". I have very little trust at this point that anything tied to short term financial interest will change (OneDrive, AI, Recall, Microsoft accounts vs local accounts, data collection etc.). Obviously (to us, but apparently not to them) it is ultimately tied to long term financial interest. I don't believe they are able to see that based on the decisions they've been taking the last decade.


Short-term profits vs long-term profits is a tension that many companies end up making bad choices about. A comment elsewhere in this discussion by safety1st mentioned the stages companies tend to go through. First, growth, growing market share by providing a good product. Then, in safety1st's words, "TAKING PROFITS wherever it can find them. This includes but is not limited to cutting back on its investment in product, as much as it can. If it can cut budgets and quality and give that money to the shareholders it will. If it can inject ads into the product or resell your data it will." Then decline, because you've focused so hard on short-term profit that you've sacrificed the long term.

The only part of safety1st's comment that I disagree with is that all companies go through this. It's not true. Most do, but not all. There are SOME companies that manage to resist short-term thinking, and remain focused on long-term profit by continuing to make good products that their customers like. They do tend to be the ones owned by a small number of people, rather than publicly traded, but they exist. To name just one, McMaster-Carr. They still make quality machine parts, just as they did about a hundred twenty-five years ago when they got started, and from all reports their customers remain quite happy with them. Their website is one of the better websites out there, too.

But yes, Microsoft is falling victim to the short-term vs long-term thinking trap that so many other companies fall into. They're trying to claim, in this post, that they're going back to thinking long-term, i.e. NOT alienating the customers you'll still need ten, twenty, thirty years from now. But I don't think they're truly shifting their mindset.


All the things you wrote, but also quality. Quality is a giant issue. Garbage usability. Excel is the only Microsoft product I thoroughly respect. The amount of things you can't do in PowerPoint for example is mind blowing.


Excel has many classical dialog boxes with text that becomes too small on high resolution displays. May be they fixed in newer versions (mine is 2021).

Excel is better than the other spreadsheet alternatives I have seen, though it is no longer software I would call trouble-free.


Why will you not “forgive them”? That is a logical consequence of using a Microsoft operating system.


Making (and uploading) a copy of your data, which might include private documents or corporate secrets that you're contractually obligated NOT to disclose, without asking you permission? That's a "logical consequence" of using a Microsoft OS? If so, then that's the best argument I've heard for not trusting Microsoft.


I don't want to suffer the consequences of using a Microsoft operating system. I'd rather use Linux or even a Mac.


It is not and let's not be silly now.


> And how would they be able to "push stuff down people's throats" if people could walk away towards alternatives?

It's a forcing of their narrow opinion on what should be allowed onto the ecosystem at large, because all of these things are connected. You can leave to a different DE/distro, but if every DE is doing its own thing for global hotkeys or whatever, then software in the ecosystem is going to be hacky/bespoke or have an unreasonable maintenance burden.

Even if you in particular can move elsewhere the ecosystem is still held back. We only recently got consensus on apps being able to request a window position on screen, which is something x11, macos, and windows all allow you to do. CSD and tray icons are other examples of things found everywhere else that they did not want to support. Some applications are just broken without tray icon support.

This bleeds over into work for folks releasing software for Linux in general. By not supporting SSD they were pushing the burden of drawing window decorations onto every single app author, and while most frameworks will handle this, it's not like everyone is using qt or gtk. App authors will get bug reports and the burden of releasing software on Linux needlessly climbs again.

Hard to convey how unreasonable I feel their stance was on tray icons / SSD. These should be the domain of the DE from a conceptual but also a practical point of view, even just considering the amount of work involved. It reminds me of how LSP enables text editors to have great support for every language. And again, Gnome was the odd man out in this: they demand extra attention and work even though Linux has the lowest desktop market share by far, and while Gnome itself is not the overwhelming majority of that, it is large enough that you really do need to make sure your software runs well on it if you want to support Linux.

People think Gnome push stuff down your throat because they have the power and influence to impact the ecosystem, and they use that power and influence to die on absolutely absurd hills.


I dunno, I think tray icons support is kind of the absurd hill to die on. They're a Windows 95-ism and generally extremely horrible in terms of usability. Apps use them and desktop environments support them mostly out of a lack of imagination, and they are frankly extremely overused.

I'm personally a KDE user, but I'm with the GNOME folks on this one.


They may have been introduced in Windows 95, but they didn't actually become particularly common until years later. They weren't originally intended as a long-term feature and, in Win 2000, Microsoft started recommending that people use custom Control Panel objects or MMC console snap-ins instead. But the MMC wasn't an option in Win98/Me and, by the time MS finally managed to produce a consumer variant of NT, use of the system tray had become entrenched.

I'm not sure what Windows is like these days, but in MacOS they're patently absurd. My corporate Mac laptop has twelve of the fucking things, and I've never actually had genuine need to click on any of them (and 5 of them are from Apple and so of course use 4 different corner radii between them - the 3rd party ones are at least a little more consistent).


It's possible I have no idea what I'm talking about, but my understanding is that NixOS relies on fetching things from third-party URLs which may simply die. I feel a bit misled by the promises of NixOS, because I cannot actually take the configuration files in 10 years and set up the system again due to link rot.

I was also under the impression that I could install DE's side by side on nixos and not have things like one DE conflicting with files from another DE, but this apparently isn't true either - I installed KDE, and then installed Sway and Sway overwrote the notification theming for KDE.

NixOS is very impressive but the marketing around it feels misleading. The reproducible claim needs a giant asterisk due to link rot.


Every system and package manager will be affected if it cannot download source code to build a package.

NixOS less so, because pretty much all source downloads that are not restricted by license are a separate output that will therefore be stored on (and downloadable from) NixOS cache servers.

I'm not sure what your expectation for this is in general, nobody can just wish into existence data that is just gone.
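The mechanism behind this is Nix's fixed-output derivations: a source fetch declares the expected hash of its result, so the same bytes can be substituted from cache.nixos.org (or any other substituter) even after the original URL dies. A hypothetical sketch - the URL and hash here are placeholders, not a real package:

```nix
# A fixed-output fetch: the sha256 pins the content, so Nix can obtain
# it from any substituter that has it, even if example.org is gone.
# URL and hash are illustrative placeholders only.
fetchurl {
  url = "https://example.org/foo-1.0.tar.gz";
  sha256 = "sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=";
}
```

If neither the upstream URL nor any cache still has the content, no tool can help - which is the asterisk the parent comment is pointing at.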


> I installed KDE, and then installed Sway and Sway overwrote the notification theming for KDE.

That sounds like you are using the nixpkgs modules to install the desktop environments (programs.sway.enable) rather than only installing the packages (environment.systemPackages = [pkgs.sway];). Both are valid, but there is a key difference:

The packages themselves cannot cause such conflicts because each package is just a separate folder under /nix/store. Nixpkgs modules both install the package _and_ configure additional settings which means that modules can interact with each other and possibly cause conflicts.

You can see it in the source for nixpkgs. This config block is applied whenever programs.sway.enable is true: https://github.com/NixOS/nixpkgs/blob/0590cd39f728e129122770...

It installs the sway package here: https://github.com/NixOS/nixpkgs/blob/0590cd39f728e129122770...

But it also edits other settings like adding sway to the display manager session packages list: https://github.com/NixOS/nixpkgs/blob/0590cd39f728e129122770...

What surprises/confuses me is: sway doesn't have notifications. You have to install your own (personally I use mako). So I'm wondering if this was a much greater change like system-wide gtk/qt theming, or did you also set up a notification daemon for sway and perhaps it was the module for _that_ which altered your KDE notifications.

(Just to be clear: I believe you that your notifications looked different, I'm just wondering about the mechanism through which they were altered.)

Personally, I don't like surprises so for 90+% of my config, I only install the packages (environment.systemPackages) rather than using the modules (programs.sway.enable) but that does mean I am essentially re-inventing the module for each package inside my own config which is a lot more work and requires a lot more nix proficiency.
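To make the distinction concrete, a minimal configuration.nix sketch of the two routes described above (assuming current nixpkgs option names):

```nix
{ pkgs, ... }:
{
  # Module route: installs sway AND wires up sessions, display manager
  # entries, etc. This is the path that can interact with other DEs.
  programs.sway.enable = true;

  # Package-only route: just puts the sway binaries in the system
  # profile, with no extra system-wide configuration applied.
  environment.systemPackages = [ pkgs.sway ];
}
```

In practice you would pick one of the two, not both; the point is that only the module route touches shared system state.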


> NixOS is very impressive but the marketing around it feels misleading. The reproducible claim needs a giant asterisk due to link rot.

It's a valid concern, though perhaps worth mentioning you will be able to restore your 10-year old config as long as the files downloaded from now-broken links are still in the Nix cache. Of course in practice, this is only useful to large organizations that have resources to invest in bespoke infrastructure to ensure supply chain integrity, since any `nix store gc` run will immediately wipe all downloads :(


I don't know why Blade was decided on, but it was started by Kvark who worked on WGPU for Mozilla for some time. I assumed it would be a good library on that basis.


kvark was also involved in the initial implementation of zed's blade backend... which probably contributed.


I harbor similar sentiments, but I understand why OpenAI, Anthropic, Zed, etc begin with a macOS version. They're able to target a platform which is a known quantity and a good jumping off point to Linux.

I'm writing software for Linux myself and I know that you run into weird edge-case windowing / graphical bugs based on environment. People are reasonably running either X11 or Wayland (the ecosystem is still in transition) against environments like Gnome, KDE, Sway, Niri, xfce, Cinnamon, labwc, hyprland, mate, budgie, lxqt, cosmic... not to mention the different packaging ecosystems.

I don't blame companies, it seems more sane to begin with a limited scope of macOS.


I would have a very hard time accepting this as true. They don't even strongly claim that it is: they observe that some brain areas associated with memory grow larger, and they associate larger brain areas with better cognition, but later in the article they note that other memory-associated areas actually shrink.

I think we should question research which is overwhelmingly against our common experience of life. My memory is absolutely shot when I am consuming weed regularly. It's not particularly subtle, it is noticeably worse. I suppose there is room for a situation in which it is worse while I am heavily using, but if I were to cease maybe it will rebound and settle at a point which is better than it would have been had I not consumed any cannabis... but I don't see any reason to believe this.

