GNU lightning: generates assembly language code at run-time (gnu.org)
161 points by mpsq on April 8, 2021 | hide | past | favorite | 75 comments


There's both a COPYING file that has the GPLv3 in it, and a COPYING.LESSER file with the LGPL in it. I can't tell what's meant by that. Dual licensed? Or portions GPLv3 and portions LGPL?


Since GPLv3, {L,}GPL have been "grand unified" to simplify things. Like code reuse, LGPLv3 is just a small patch on top of GPLv3 that says "given Condition X, you don't have to follow Clause Y in GPLv3"; it's literally just a few short paragraphs. Without the original GPLv3 text, LGPLv3 is meaningless, like running an executable without its shared library.

Hence, although many people don't realize it, technically every project that licenses itself under LGPLv3 must contain a copy of GPLv3.


Look at the actual files for copyright (or copyleft, if you will): http://git.savannah.gnu.org/cgit/lightning.git/tree/lib/ligh...

Looks like the library is LGPL; I couldn't find a license for the docs themselves (I assume they use the documentation license (COPYING.DOC)).

There seems to be at least one example program that is GPL: http://git.savannah.gnu.org/cgit/lightning.git/tree/doc/body...


Helpful, thanks. Feels like they should summarize it somewhere though, since GPLv3 and LGPL are wildly different.


The text of the LGPL relies on the text of the GPL.


The LGPLv3 is implemented as a set of additional permissions on top of the GPLv3. The library is LGPL.


It's under LGPL. The LGPL allows you to relicense your work under the GPL, if you want.


Why does it only have six registers? The end user probably doesn't want to write a register allocator.


"Six registers are not very much, but this restriction was forced by the need to target CISC architectures which, like the x86, are poor of registers; anyway, backends can specify the actual number of available registers with the calls JIT_R_NUM (for caller-save registers) and JIT_V_NUM (for callee-save registers)." (https://www.gnu.org/software/lightning/manual/lightning.html...)

There used to be a broadly similar library called libjit https://www.gnu.org/software/libjit/ which was slightly higher level: it offered an arbitrary number of virtual registers and took care of register allocation, though I don't recall how sophisticated it was. The docs call it "global" allocation, i.e., taking the whole function into account at once, as opposed to local allocation that considers blocks in isolation.


It wasn't very sophisticated when I used it 12 years ago, but it was "good enough". I know work has been done since then on the register allocator. No idea what the current state is.


Might not be the reason, but six registers make it trivial to write a backend for 32-bit x86 CPUs, since they offer 8 general-purpose registers, minus the 2 typically reserved for the stack and base pointers (ESP and EBP respectively). That leaves 6 registers usable by compilers: EAX, EDX, ECX, EBX, ESI, EDI.

To my knowledge, every other architecture targeted by GNU lightning offers more registers, so this magic number is effectively the largest register count one can expose while keeping register allocation trivial on all supported architectures.


Does anyone know how this compares with libgccjit? What are the differences in their use cases?

(I note that Guile picked lightning, and emacs chose libgccjit for instance for what seems to be similar roles.)


Guile uses lightening (with an "e"), which is derived from GNU lightning: https://gitlab.com/wingo/lightening


... which is essentially the same as the first version of lightning before the maintainer changed and wrote an optimizer. The first version was literally just a header file emitting constant instruction patterns.


Libgccjit is based on regular gcc, which is a heavyweight compiler. The 'jit' is somewhat of a misnomer; you wouldn't want to actually use that for any of the same cases as a traditional jit (e.g. browsers). (LLVM jit has the same problems.)

I think the main point of libgccjit is actually to provide a cleaner interface than GIMPLE to gcc.


> you wouldn't want to actually use that for any of the same cases as a traditional jit

Well the most high-profile project using libgccjit right now is Elisp (feature/native-comp GNU Emacs branch, which by all indications will eventually make it into the official release), and using GCC as a JIT compiler is how Embeddable Common Lisp has always worked.


Most of the elisp code you run is largely static. It does change, but only gradually; when you update a package, or make a small tweak to your config, you only have to recompile a few files. The elisp isn't being JITed, it's being compiled into shared objects which are cached aggressively. Completely different kettle of fish.

> Embeddable Common Lisp

ECL compiles to c.


> Most of the elisp code you run is largely static. It does change

And JavaScript programs almost never change after a website is loaded. That has nothing to do with whether the programs are compiled.

> you only have to recompile a few files. The elisp isn't being JITed, it's being compiled into shared objects which are cached aggressively

You do not need any files to compile Elisp code, and you do not have to save the bytecode anywhere.

> Completely different kettle of fish.

What exactly is your definition of JIT? I think you might be confusing "JIT" with dynamic recompilation.

> ECL compiles to c.

Ok, and what do you think happens to C?


>> Most of the elisp code you run is largely static. It does change

> And JavaScript programs almost never change after a website is loaded

No, but new javascript programs are frequently loaded.

> You do not need any files to compile Elisp code, and you do not have to save the bytecode anywhere.

You do not need to do any of this by hand, but it still happens; emacs does it automatically.

> What exactly is your definition of JIT? I think you might be confusing "JIT" with dynamic recompilation.

The term is somewhat nebulous. I don't think it's appropriate to apply it to the result of a traditional compilation process that occurs immediately prior to execution. If (for instance) I write a script in c and run it with tcc -run, what happens is that that file is compiled—very similarly to a ‘traditional’ AOT, though the dynamic linker is bypassed—and the result is run.

Essential to JIT, in my view, is not dynamic recompilation, but plain dynamic compilation. Compilation should occur concurrently with execution, and is usually accompanied by some heuristic for deciding what to compile first (e.g. tracing) as well as an interpreter. Wikipedia agrees:

> JIT is a way of executing computer code that involves compilation during execution of a program

  ______________________________________
> what do you think happens to C?

Under most implementations, it is statically compiled to a native binary. Even if it is not, however, ECL does not JIT c, but produces it statically.


> No, but new javascript programs are frequently loaded.

This is probably what confuses a lot of people into thinking that program distribution has anything to do with JIT. JavaScript is the only programming language for which the main method of program distribution is source-code-only, which is why all the trade-offs of adaptive optimization seem necessary, to the point that people now make its necessity into a virtue. Likewise for Java stealing credit for popularizing portable bytecode as the main method of program distribution - although Gosling coined the term JIT years before Java had a JIT compiler.

> If (for instance) I write a script in c and run it with tcc -run

And what happens when you use libtcc?

> Compilation should occur concurrently with execution

Execution of what? You do not need an interpreter or a bytecode VM to have a JIT compiler.

> usually accompanied by some heuristic for deciding what to compile first (e.g. tracing)

That is adaptive optimization, and is orthogonal to compiling (you can have adaptive optimizations in an interpreter or a bytecode VM). Tracing (which is not a heuristic btw) is a variation of profile-guided optimization, which is orthogonal to JIT.

> Even if it is not, however, ECL does not JIT c, but produces it statically.

I posted this elsewhere in this thread:

https://gitlab.com/embeddable-common-lisp/ecl/-/blob/develop...

Can you explain why you think that is not a JIT.


I don't think ECL uses GCC as a JIT; most common lisps use an incremental AOT compiler, which is different. ECL also has an interpreter, but I'm pretty sure it doesn't use GCC.


> I don't think ECL uses GCC as a JIT; most common lisps use an incremental AOT compiler, which is different

https://gitlab.com/embeddable-common-lisp/ecl/-/blob/develop...

What is the difference?


A JIT compiles as the code is run, while an AOT compiler compiles before the code is run:

  (compile (lambda (x) x))
Compiles the lambda immediately.

[edit]

https://en.wikipedia.org/wiki/Dynamic_compilation discusses Dynamic Compilation vs Incremental Compilation.


> A JIT compiles as the code is run, while an AOT compiler compiles before the code is run

By that definition using the REPL or loading source files in SBCL qualifies as a JIT.

> https://en.wikipedia.org/wiki/Dynamic_compilation discusses Dynamic Compilation vs Incremental Compilation.

That article just seems to perpetuate the confusion: "Runtime environments using dynamic compilation typically have programs run slowly for the first few minutes, and then after that, most of the compilation and recompilation is done and it runs quickly."

This could be talking about adaptive optimization, or dynamic recompilation, or even dispatch caching. You can do adaptive optimization in an interpreter or bytecode VM. Dispatch caching does not involve anything that looks like a compiler at all unless you are doing inlining.

Most importantly, none of those things is what GNU Lightning or libgccjit does. All they do is generate machine code, at run-time. That is what a JIT is.


> By that definition using the REPL or loading source files in SBCL qualifies as a JIT.

When SBCL compiles rather than interprets at the REPL, that arguably qualifies as a JIT. Same for loading top-level forms that aren't DEFUN. I'm pretty sure that ECL uses an interpreter in those cases though.


Two people (you and moonchild) reply to my statement that ECL uses a C compiler as a JIT, and come to the exact opposite conclusions about what a JIT is. I think people are really confused about this topic.

Again, look at the code and explain how that is not JIT: https://gitlab.com/embeddable-common-lisp/ecl/-/blob/develop...


If compilation is explicit, it's not a JIT. That's why I said SBCL's LOAD and REPL (which decides between interpreting and compiling) is arguably JIT.

When you explicitly compile (e.g. via ASDF which separates the compile and load phase with FASLs between) in any lisp, that doesn't meet the bar for being a JIT anymore than a C program that distributes plugins as source and compiles them to load the .so is.


> If compilation is explicit, it's not a JIT.

You have machine code generation at run-time, you can then make arbitrary decisions for when to do the code generation. Explain how this is not a JavaScript JIT compiler when you run it in ECL: https://github.com/akapav/js/blob/master/js.lisp#L72

> That's why I said SBCL's LOAD and REPL (which decides between interpreting and compiling) is arguably JIT.

SBCL does not really have an interpreter (it started out without any kind of interpreter at all), it only has a compiler. No wonder you misunderstood my point about SBCL.

> that doesn't meet the bar for being a JIT anymore than a C program that distributes plugins as source and compiles them to load the .so is.

Let's try to rephrase that a little bit: "that doesn't meet the bar for being a JIT anymore than a [web browser] program that distributes [loads] plugins [JavaScript] as source and compiles them to load the .so [machine code] is."


SBCL has two interpreters, IIRC. You can configure SBCL to use an interpreter.

But neither SBCL nor ECL has a JIT. A JIT is a subsystem which on its own decides to compile an intermediate representation (usually some compiled byte code) to native code. Typically the JIT decides when and how to compile it, often using runtime information to compile runtime optimized code.

In both SBCL and ECL, the Lisp is either configured to compile AOT or the user calls the compiler. If there is a source interpreter running the source, the user has to explicitly call COMPILE, and COMPILE will not use any runtime information on its own - like using statistics about how often functions are called with which argument types to compile code versions for different argument types. It will also not compile new optimized versions based on additional runtime information. The current Lisp compiler will only use the static information in the source.

JIT systems OTOH watch the byte code execution and compile code on its own to optimized/native versions, based on the runtime information the JIT system observes.

In basically most Common Lisp implementations, the decision about what kind of code is compiled, and when it is compiled/executed, is left to the user.

Though I think there was a CLISP experiment with compiling CLISP byte codes to native with a JIT. https://sourceforge.net/p/clisp/mailman/clisp-devel/thread/e...

ECL has three ways to run code: an interpreter, a byte code engine and a Lisp-to-C compiler (files, and incremental via files). In theory one could use a JIT to compile the byte codes to native code at runtime. I don't know if this has been tried.

Some experiments of improving CLOS speed at runtime (similar to what Julia does) might be considered to be a moral equivalent of a JIT - but I haven't checked it out.

https://github.com/numcl/specialized-function


> SBCL has two interpreters, IIRC. You can configure SBCL to use an interpreter.

SBCL's original "interpreter" is %simple-eval, which just calls the compiler. SB!EVAL was added sometime around 2008, but as you point out it is not used unless you set evaluator-mode to something other than :compile. The default code path goes through %simple-eval and the compiler. I have been using SBCL since before SB!EVAL was added, and I have never used the interpreter.

> ECL has three ways to run code: an interpreter, a byte code engine and a Lisp-to-C compiler

The confusing thing about ECL is that someone decided to call their bytecode VM an "interpreter." ECL has no interpreter - eval_nontrivial_form just calls the bytecode compiler.

All that is irrelevant to the fact that ECL uses GCC as a JIT.

> In theory one could use a JIT to compile the byte codes to native code at runtime.

Which would be completely pointless because GCC (or Clang or MSVC, but the original question was about libgccjit) is already there and does better optimization.

> Some experiments of improving CLOS speed at runtime (similar to what Julia does) might be considered to be a moral equivalent of a JIT

Like half of the PCL is devoted to various kinds of caching that are in fact adaptive optimization. It is not a big step from that to inline caching. But again, why would you do that when you don't need to?

https://github.com/guicho271828/inlined-generic-function https://github.com/marcoheisig/fast-generic-functions

There is a reason Mozilla went from tracing (TraceMonkey) to inline caching (JägerMonkey) to type specialization (IonMonkey). JavaScript JITs have moved closer to Lisp compilers, not vice-versa.

I have covered other misconceptions that you have listed elsewhere in this thread.


> and I have never used the interpreter

There would only be a reason to use it, if it were providing added functionality. In SBCL I don't use the interpreter, but in other implementations I might use it, for example when a source stepper uses an interpreter.

> ECL has no interpreter

I missed that - actually ECL had an interpreter, but it was removed a long time ago.

> Which would be completely pointless because GCC (or clang or MVCC, but the original question was about libgccjit) is already there and does better optimization.

That's not clear. If one would do runtime JIT compilation with runtime statistical information, it could very well be a speed improvement. There are a bunch of things which could (!) be useful:

Typically functions in Lisp have only one compiled version, depending on a fixed amount of declarations and optimization settings. A JIT could compile several optimized versions for different argument types and choose the best fit one at compile time.

A JIT could do inlining code based on statistical information and select at runtime optimized functions with additional inlining.

A JIT could remove runtime dispatch for often used code parts and provide versions for that.

> PCL is devoted to various kinds of caching

And a bunch of implementations were trying to make sure that no compilation takes place during CLOS calls -> thus making sure that a Lisp compiler at runtime is not necessary.

> There is a reason Mozilla went from tracing (TraceMonkey) to inline caching (JägerMonkey) to type specialization (IonMonkey). JavaScript JITs have moved closer to Lisp compilers, not vice-versa.

Lisp compilers are usually AOT native code generating compilers, like ECL's Lisp->C compiler.

> I have covered other misconceptions that you have listed elsewhere in this thread.

Like your claim that ECL's version of the Common Lisp COMPILE is a JIT. Then basically most COMPILE implementations in Common Lisp implementations would be a JIT. Is there anything about ECL's COMPILE function that makes it special?

Otherwise Lisp I of 1960 had a JIT. The function comdef (page 53 in the Lisp I Programmer's Manual) would compile named functions at runtime to machine code.


> If one would do runtime JIT compilation with runtime statistical information, it could very well be a speed improvement.

Or you could just be wasting cycles on profiling overhead and recompiling things for no reason.

> A JIT could compile several optimized versions for different argument types and choose the best fit one at compile time.

That's... a compile time optimization.

> A JIT could do inlining code based on statistical information and select at runtime optimized functions with additional inlining.

Tracing is very close to self-modifying code. What happens when you redefine a function? What happens when you redefine a function at a breakpoint? There is a lot of runtime overhead (profiling, bookkeeping to back out of inlining) and it is very hard to debug. And it is never going to be as fast as static inlining and dispatching (sealing) at compile time.

> A JIT could remove runtime dispatch for often used code parts and provide versions for that.

That's what dispatch caching already does.

I think you and other people in this thread bought into the marketing hype for 10 year old tracing JavaScript implementations and now think that "JIT" is some kind of magic runtime optimization. The optimizations are all orthogonal to JIT (in fact GNU Lightning does none of the ones you alluded to). Adding tracing to any of the existing Common Lisp implementations would be a huge waste of time when there are easier and more effective things that can be done (like say converting an existing implementation to use SSA).

> Then basically most COMPILE implementations in Common Lisp implementations would be a JIT.

Yes.

> Is there anything about ECL's COMPILE function that makes it special?

ECL uses GCC. If you read the original question, this was asking about libgccjit.


GNU Guile (known primarily for its Scheme implementation) uses a modified version of Lightning under the hood.


There was a similar JIT library written in C mentioned on HN a while ago. It wasn't libjit or lightning; I forget what it was called. It seemed a lot simpler.


I'm not sure how one could get much simpler than libjit. The first time I read the documentation I had a working jit for a toy language five minutes later, and less than a week after that I was jit-compiling Ruby code (not the full language, obviously, just a subset). It really is a pleasure to use, since the API is based on normal C functions instead of macros.


Simpler doesn't imply more featured; as I recall that simpler approach had more rough edges. Still haven't found it.


https://www.gnu.org/software/lightning/

>"GNU lightning is a library that generates assembly language code at run-time; it is very fast, making it ideal for Just-In-Time compilers, and it abstracts over the target CPU, as it exposes to the clients a standardized RISC instruction set inspired by the MIPS and SPARC chips."

[...]

>"GNU lightning is usable in complex code generation tasks. The available backends cover the aarch64, alpha, arm, hppa, ia64, mips, powerpc, risc-v, s390, sparc and x86 architectures."

https://www.gnu.org/software/lightning/manual/lightning.html

>"To be portable, GNU lightning abstracts over current architectures’ quirks and unorthogonalities. The interface that it exposes is that of a standardized RISC architecture loosely based on the SPARC and MIPS chips. There are a few general-purpose registers (six, not including those used to receive and pass parameters between subroutines), and arithmetic operations involve three operands—either three registers or two registers and an arbitrarily sized immediate value.

On one hand, this architecture is general enough that it is possible to generate pretty efficient code even on CISC architectures such as the Intel x86 or the Motorola 68k families.

On the other hand, it matches real architectures closely enough that, most of the time, the compiler’s constant folding pass ends up generating code which assembles machine instructions without further tests."

[...]

>"3 GNU lightning’s instruction set

GNU lightning’s instruction set was designed by deriving instructions that closely match those of most existing RISC architectures, or that can be easily synthesized if absent. Each instruction is composed of:

- an operation, like sub or mul
- most times, a register/immediate flag (r or i)
- an unsigned modifier (u)
- a type identifier or two, when applicable

Examples of legal mnemonics are addr (integer add, with three register operands) and muli (integer multiply, with two register operands and an immediate operand). Each instruction takes two or three operands; in most cases, one of them can be an immediate value instead of a register.

Most GNU lightning integer operations are signed wordsize operations, with the exception of operations that convert types, or load or store values to/from memory. When applicable, the types and C types are as follows:

     _c         signed char
     _uc        unsigned char
     _s         short
     _us        unsigned short
     _i         int
     _ui        unsigned int
     _l         long
     _f         float
     _d         double
Most integer operations do not need a type modifier, and when loading or storing values to memory there is an alias to the proper operation using wordsize operands; that is, if omitted, the type is int on 32-bit architectures and long on 64-bit architectures. Note that lightning also expects sizeof(void*) to match the wordsize.

When an unsigned operation result differs from the equivalent signed operation, there is a _u modifier.

There are at least seven integer registers, of which six are general-purpose, while the last is used to contain the frame pointer (FP). The frame pointer can be used to allocate and access local variables on the stack, using the allocai or allocar instruction.

Of the general-purpose registers, at least three are guaranteed to be preserved across function calls (V0, V1 and V2) and at least three are not (R0, R1 and R2).

Six registers are not very much, but this restriction was forced by the need to target CISC architectures which, like the x86

[PDS: I think despite this trade-off, that this was a good engineering decision(!), otherwise x86 could not be targeted, and that would destroy a huge swath of functionality!]

, are poor of registers; anyway, backends can specify the actual number of available registers with the calls JIT_R_NUM (for caller-save registers) and JIT_V_NUM (for callee-save registers).

There are at least six floating-point registers, named F0 to F5. These are usually caller-save and are separate from the integer registers on the supported architectures; on Intel architectures, in 32 bit mode if SSE2 is not available or use of X87 is forced, the register stack is mapped to a flat register file. As for the integer registers, the macro JIT_F_NUM yields the number of floating-point registers.

The complete instruction set follows; as you can see, most non-memory operations only take integers (either signed or unsigned) as operands;

this was done in order to reduce the instruction set

[PDS: Opinion: Any way that the instruction set can be simplified is a good thing!]

, and because most architectures only provide word and long word operations on registers. There are instructions that allow operands to be extended to fit a larger data type, both in a signed and in an unsigned way."

[...]

>"8 Acknowledgements

As far as I know, the first general-purpose portable dynamic code generator is DCG, by Dawson R. Engler and T. A. Proebsting. Further work by Dawson R. Engler resulted in the VCODE system; unlike DCG, VCODE used no intermediate representation and directly inspired GNU lightning.

Thanks go to Ian Piumarta, who kindly accepted to release his own program CCG under the GNU General Public License, thereby allowing GNU lightning to use the run-time assemblers he had written for CCG. CCG provides a way of dynamically assembling programs written in the underlying architecture’s assembly language. So it is not portable, yet very interesting.

I also thank Steve Byrne for writing GNU Smalltalk, since GNU lightning was first developed as a tool to be used in GNU Smalltalk’s dynamic translator from bytecodes to native code."

[...]

PDS: Overall, looks like a great idea!

Also, I think that future compiler and VM writers and FPGA soft-CPU authors -- should target this "abstracted instruction set"!

Keywords: Instruction Set Architecture, ISA, x86, RISC, RISC-V, Abstraction, Abstract Instruction Set, Instruction Set Subset, Generic Compatibility Layer


> I think that future compiler and VM writers and FPGA soft-CPU authors -- should target this "abstracted instruction set"!

GNU lightning succeeds in what it sets out to do, which is to offer a simple and minimal JIT code-generator. It offers nothing in the way of optimisation, by design. Most projects looking for a code-generator are looking for something with great optimisation built-in, so they're not wrong to go with LLVM or the JVM rather than GNU lightning (or something similar like Mir [0][1]). I don't think the average compiler would gain much by targeting GNU lightning.

With all that said, GNU Guile, a Scheme interpreter, uses a fork of GNU lightning, insufferably named lightening. [2]

[0] https://github.com/vnmakarov/mir

[1] https://lists.gnu.org/archive/html/lightning/2020-02/msg0001...

[2] https://wingolog.org/archives/2019/05/24/lightening-run-time...


I'm not sure how well GNU lightning has aged, but considering that the above text was written by a pretty clueless 20 year old guy (me), that's a great compliment. Thanks. :)


I believe Guile and GNU Smalltalk used to run on lightning; do they still?

What other projects use GNU lightning these days?


Guile runs on a modified version of lightning 1 (called lightening, for maximum confusion) because the recent version is more heavyweight.

It is a part of the recent JITted 3.0 release, which brought a lot better performance to the already completely OK 2.2 branch.


GNU CLISP had GNU Lightning added as an option for compiling its VM bytecode in 2008. IMO a bad decision - more maintenance burden; code can now have subtly different bugs depending on whether it is run in the CLISP interpreter, the CLISP bytecode VM, or as CLISP bytecode compiled with GNU Lightning; generated code is much larger than bytecode (small memory use was the one unique thing CLISP had over other Common Lisp implementations); and performance is still low compared to other Common Lisp compilers.

Today libgccjit seems like a much better alternative for this kind of application than GNU Lightning.


Documentation was last updated in September 2019


Stable software doesn't require modification.

If a tool is intended to do one thing well, then it won't be constantly getting features.

Or to put it another way, users have just been able to use it for almost two years without time spent upgrading to the latest release.

Quicksort is still a useful algorithm after nearly sixty years. Merge sort after almost eighty.


There's an algorithm and then there's its implementation. So much more goes into implementation. That said, September 2019 is much, much better than a lot of software I use.


I like the GNU lightning project, but I wouldn't say it's so stable it doesn't need updating. Disassembly doesn't work properly on Ubuntu, which strikes me as a pretty serious problem, but which unfortunately they seem to be in no hurry to fix. [0]

I think they'd do well to adopt a proper bug-tracker. Mailing lists are a good way to lose track of known issues.

[0] https://lists.gnu.org/archive/html/lightning/2020-06/msg0000...


If it doesn’t work on Ubuntu that seems like a fault with Ubuntu. That’s where the breaking change is.


If it doesn’t break in Debian, then it’s Ubuntu’s fault.


From what I recall it doesn't work in Debian either.


Then it wouldn't be Ubuntu's fault.


I don't think the fault is with Ubuntu. configure scripts are responsible for coping with variations between different distros. lightning's configure script fails to accommodate the way Ubuntu installs the binutils-dev package. It works ok with (Red Hat based) Amazon Linux 2 on x86-64. It failed on Amazon Linux 2 on AArch64 though, with a seemingly unrelated issue, which was the second bug I reported.

I don't have the specifics to hand. I also don't know autotools, so I'm not well positioned to write a fix.


Technically, the configure script is responsible in a non-ethical/moral sense.

In terms of *nix spirit, it sounds like Ubuntu is not being generous in what it accepts.


> Technically, the configure script is responsible in a non-ethical/moral sense.

I don't follow. Part of the role of a configure script is to accommodate the variations between distributions. If a configure script fails to work with a major distro, that generally means the configure script needs fixing. I don't see that ethics have anything to do with it.

> In terms of *nix spirit, it sounds like Ubuntu is not being generous in what it accepts.

Ubuntu likely isn't actively doing anything in the process here, it probably isn't in a position to accept or reject anything. Ubuntu presumably places certain files in places the configure script fails to check. This kind of variation between distros is annoying, but typical, and again it's part of the reason we use configure scripts in the first place. (Or modern alternatives like CMake.)

If the issue turns out to be that Ubuntu has an issue with pkg-config (a package-discovery system with distro integration) failing to find binutils, then yes, the onus would be on Ubuntu to fix it. I don't think that's the case here though.


> Quicksort is still a useful algorithm after nearly sixty years. Merge sort after almost eighty.

No one uses Quicksort as Hoare originally proposed (modern variants include 3-way quicksort and pattern-defeating quicksort), as much as merge sort as von Neumann originally proposed (modern variants include adaptive merge sort and Timsort). Both were tremendously updated to keep up with the ever-changing computer architecture and requirements.


Sure, I agree. However, it is also a sign that no new features or capabilities have been added.

> Quicksort is still a useful algorithm after nearly sixty years. Merge sort after almost eighty.

I am not sure specific algorithms really compare to a whole suite of algorithms and libraries (i.e., a tool) making up a very specific piece of software like GNU lightning, but sure, both Quicksort and Mergesort have been useful since their inception. Try installing Windows 3.1 (don't use VMs, please) on your current computer and see if we can really live without updating things.

But I'd argue that if we follow your idea, we can say that the concept of democracy has also been useful since the 18th century. And still, it has had to adapt over time and be updated (e.g. allowing women and black people to vote or to freely use public spaces like toilets and elevators).


Love this comment.


Too bad it's GPL, is there something like this that's MIT, BSD, or Apache licensed?


If you want to benefit from free software you can share your own software by licensing it GPL.


The MIT license predates the GPL.

Real freedom has been there the whole time.


The MIT license encourages freeriding and impoverishes the free-software community, which reduces overall freedom.


Weird that you'd say that - given that our license predates the GNU license and the fact that our community is way larger than the GNU one.

It's impoverishing the free-software community by... existing for longer and having WAY more people involved in making free-software?


I don't think that argument works. Stealing, for example, is old and common but still impoverishes the community.

Way more people are involved in making closed-source software. MIT license encourages more closed source software, while GPL encourages more open source software. GPL recognizes the conflict and sides with people who contribute.


It's rich to liken people using MIT to thieves when the terms are clearly stated in the license.

It seems like the GPL people are a vocal minority who want to control what everyone else does with software, patting themselves on the back about being "true contributors" while generating much less actual software in practice.

Calling another community "impoverished" which is far larger and produces way more software is also rich.


I think I didn't explain my point well. I don't call people using MIT stealing, it was just an analogy, maybe an unclear one.

Anyway.

> Calling another community "impoverished" which is far larger and produces way more software is also rich.

I don't consider MIT a separate community from GPL. We all live in the world together.

People who license their software MIT are of course contributing to the open-source community -- in large quantities, as you point out. And that software is useful. The point I'm making is that closed-source software doesn't contribute to the open-source community, and that GPL, more than MIT, encourages open-source over closed source.

> who want to control what everyone else does with software

There is a tradeoff: by exerting a certain amount of control in the "I'll share if you'll share" space, we can get all of the other freedoms in exchange. There are other ways to handle that balance, such as allowing anybody to use stuff without giving back.

Any way we handle it, we'll have to give up some rights in exchange for something. With MIT the tradeoff is implicitly made one way, favoring stronger first-order freedoms via a weaker social contract. With GPL the tradeoff is explicitly written into the license, favoring stronger higher-order freedoms via a stronger social contract.


It’s LGPL. You are free to prevent your users from enjoying the same freedoms you do.



Awesome I'll check it out, thank you.


Has had 6 commits in the last two years. From the same person. 2.1.0 was released 6 years ago, the latest 2.1.3 release came out two years ago. Releases are available from some random FTP server.

On one hand, this could be software that is completely done and requires no more maintenance. On the other hand, it could be another random half-baked GNU project with a single champion who has lost interest but is still saddled with a purist/terrible new-contributor onboarding experience that dooms it to an inevitable grave.

You can be the judge of that.


"Random FTP server"? You mean the GNU FTP server?


You speak of that like it’s some bastion of software delivery. You mean THE gnu ftp server??

It’s a random FTP server. Could be SourceForge for all it matters. Doesn’t even support SFTP.


Any server is a random FTP server; what's your point? You can't fully trust any of them, only the people who run them, and only to a certain degree.

The GNU project has been around since '84 and has always tried to maintain solid open-source software vetted by real people.

What do you trust instead?


If you look closely you'll see that ftp.gnu.org is an HTTP server. But it seems that you just want to troll.


And if you look closer you’ll see that it’s an FTP server with an HTTP interface (http, not https, by default, mind you).

ftp://ftp.gnu.org/


The link is http, https also works.

Even if it is only http, so what? PGP signatures exist too.
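For what it's worth, GNU releases ship with detached `.sig` files, and the project publishes a keyring of release-signing keys at https://ftp.gnu.org/gnu/gnu-keyring.gpg. A hedged sketch of the verification flow, assuming the tarball, its signature, and the keyring have already been downloaded (even over plain http, a successful check rules out tampering in transit):

```shell
#!/bin/sh
# Sketch: verify a GNU release tarball against its detached PGP signature.
# gpgv is GnuPG's verify-only tool; it trusts exactly the keys in the
# keyring file we pass, which is what we want here.
verify_gnu_tarball() {
    tarball=$1
    # Succeeds (exit 0) only if "$tarball.sig" is a valid signature over
    # "$tarball" by a key present in ./gnu-keyring.gpg.
    gpgv --keyring ./gnu-keyring.gpg "$tarball.sig" "$tarball"
}
```

Invoked as, e.g., `verify_gnu_tarball lightning-2.1.3.tar.gz` after fetching both files; a non-zero exit means the download should not be trusted, regardless of the transport it arrived over.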


And if you really squint hard enough you can blur out all your comments and not have to read them.



