> Every lisp hacker I ever met, myself included, thought that all those brackets in Lisp were off-putting and weird.
Not me. I used C before Lisp, and Pascal before that. C replaced syntax like BEGIN and END and "END <procedure name>" with just { }. I thought this was a most excellent thing.
I was still not using Lisp when HTML and then XML came along. My reaction to XML was this: since most of the content of XML is payload data, why don't we just fscking use parentheses or braces to structure it? Some {foo bar {baz}} or whatever instead of this moronic, bandwidth-wasting, unreadable nonsense <foo>...</foo>.
I could understand it when the markup is just a few raisins in the pudding like:
> Lorem ipsum dolor sit amet, <i>consectetur</i> adipiscing elit, sed do eiusmod
but this XML thing made no sense as a data notation in which every little leaf element is wrapped in verbiage.
So then when I got into Lisp, it basically had the syntax I already wanted; it was "pre-approved".
I already liked parentheses because they occur in English prose (like this), in mathematics, and in almost every programming language I had ever used.
The Unix shell and other command languages eliminate the commas: we don't write
`tar, czvf, foo.tar.gz, foo`
Moreover, command arguments are not restricted to narrow lexical categories like "must begin with letter or underscore ..."; they are just clumps of non-whitespace characters. If you know any command languages, what Lisp is doing in that regard is obvious; you're not confused by a+b just being an argument, different from a + b.
Commands with the main function on the left followed by space-separated arguments occur in parentheses in POSIX command substitution syntax: `$(tar czvf foo.tar.gz foo)`.
In general, by the time I got into Lisp I had written so many scanners and parsers, solved so many shift-reduce and reduce-reduce conflicts and whatnot, I knew a good thing when I saw it.
YAML sort of addresses this, but I really wish companies would just go with Lisp when they want to do weird DSLs, such as, ahem, Ansible with their weird YAML. As weird as Lisp is, it can never be as weird as learning idiosyncratic languages for different products. It is the most minimalist machine-human compromise for lists of lists.
Yes, let’s dump YAML. Yes, let’s dump arbitrary DSLs for declarative data structures. Yes, let’s adopt a lisp’s syntax. Just... let’s adopt the one (EDN) with more than one kind of bracket, some kind of traction, a surprisingly good mapping to static types, and a surprisingly robust compatibility with existing tooling (Transit).
It’s true there are some rough edges here, notably mostly in the actual Clojure(Script) implementations (unfortunately unsurprising) rather than in EDN. Also worth emphasizing that Transit addresses many of the issues highlighted (as well as EDN’s biggest downside outside the Clojure ecosystem: it’s slow).
There are places where it makes sense. I don't love Kubernetes' YAML, but it makes sense as a serialization format. I think LISP would be out of place here firstly because your serialization format really shouldn't be executable, and secondly because it offers few advantages. You can implement any DSL you want and just have it output YAML.
Ansible really should have ditched the YAML format a long time ago. The YAML is practically designed to be executable; I'm almost surprised that you can't add a shebang line pointing to Ansible at the top of your YAML. I'm actually a little surprised they've never offered an option to write parts of your Ansible setup in Python.
I would love to make some of my roles and my playbook into Python code. I am forever googling the syntax to set up a loop in Ansible YAML, as well as how to make dependencies between roles, which is pretty cleanly solved with Python modules (just import and execute the role at the top of your new role).
It works pretty nicely, just remember to make your playbook file executable. It's a bit cumbersome though since you cannot use `ansible-playbook` arguments like `--limit`, `--diff`, `--check` and so on to have better control over the playbook execution.
Doesn’t look weird to me, that’s just an empty lambda. Pretty recognizable for anybody who works with lambdas, IMO - it’s just an empty capture with no parameters and no body. Not something you would ever see in real code unless doing something very weird.
Lisp doesn't really have (much) syntax. The programmer is directly creating a collection of syntax trees (more or less) -- i.e., what most compilers generate after their syntax analysis pass. In fact, you can use Lisp syntax for abstract syntax trees (ASTs) as the output of a parser.
Of course it is this very fact that makes code and data interchangeable in Lisp and allows for constructs that other programming languages with syntax can't mimic (e.g., powerful macro features).
What I like about Lisp's syntax is that the cursor (point) is always in a complete Lisp program. Move up one set of parentheses, and it's again a complete program. Move up further ... till you reach the top (file level).
This gives editor makers a good opportunity: compile "this" level, one level up, two levels up, and so on. In an editor you can always check your program while coding.
Example is Cider package for Emacs for Clojure programming language.
It is for this reason that it is misleading to say that lisp languages "have no syntax" - they do have considerable constraints on their structure. It's almost as if there's a third layer between "syntax" and "semantics" - or perhaps the word "syntax" conceals two distinct types of structure.
The XML people came up with the word "schema" for the remaining shape of a datum when the details of recovering the tree itself from the tokens are settled.
Understanding what lisp-ers mean by 'repl-driven development' -- and why a python or node repl isn't it -- is really hard for people who don't get this, and haven't used it in practice.
It's not even only about the syntax. The Python environment, as far as I understand (and please correct me if I'm wrong), does not support re-definitions very well. Some things can be re-defined, but some cannot. So that's a problem in itself which makes REPL-driven development not very useful in Python.
Not only can you redefine classes, you can change the class an object belongs to. Just point `obj.__class__` at something else, or change `cls.__bases__` to reorganize the class's inheritance hierarchy.
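A quick CPython sketch of what this looks like in practice (the class names here are made up for illustration):

```python
class A:
    def who(self):
        return "A"

class B:
    def who(self):
        return "B"

obj = A()
print(obj.who())        # prints A

# Point the instance at a different class; subsequent method
# lookups now go through B instead of A.
obj.__class__ = B
print(obj.who())        # prints B
```

The same mutability applies to `cls.__bases__` for ordinary Python classes, with some layout-compatibility restrictions imposed by CPython.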
I used to really like Lisp and did some professional programming using both Racket and CL. But now I don't really like it anymore and parentheses do put me off.
The benefit of easy AST manipulation just doesn't seem worth it when the cost is a very heavy and verbose syntax. Everything just takes a little longer to write in Lisp, and the syntax is far too noisy.
Moreover, languages with powerful and easy-to-use macros (like Julia) don't have a paren-heavy syntax, so what value does Lisp syntax really even provide anymore? New languages have demonstrated you don't have to sacrifice a lightweight syntax to get expressive macros.
It also makes any sort of code that uses equations incredibly burdensome to write.
In short, while Lisp syntax used to serve a purpose (facilitation of macros/modification of AST), I feel like it no longer really does.
For comparison, I think OCaml syntax is really nice: light and readable.
I love Lisp syntax because I am stupid. It is actually easier to remember. Remembering the function names and library calls is difficult for me because of my disability. But because there is no syntax it is so easy to parse in your head.
I maintain that the problem with Lisp for most programmers coming from C-like languages is not the parentheses.
It's the fact that lisp style is heavily oriented around expressions (since almost everything is an expression) which leads to a lot of nesting and a functional style, rather than a block-oriented imperative style.
I can't speak to Rust, but I suspect in the case of Haskell the criticism doesn't arise because the whole language is so foreign to people coming from C-like languages that everything being an expression is just one of many hurdles.
So it doesn't stand out as much on its own, and either you give up or by the time you make it through enough of the other hurdles you probably appreciate the expression-orientedness.
I say this as a Haskell enthusiast.
Do-notation probably also masks the fact that everything is an expression from newcomers a bit in the beginning.
Is this really about syntax? I think this is about homoiconicity. But homoiconicity is NOT the same thing as parentheses (or brackets, as here)! Homoiconicity is not a (concrete) syntax issue; it's an AST and semantics issue in my view.
Why are we so hung up on the superficial details of the syntax? Can't we just have an AST spec with bijective mappings between different human presentations? Editors already do color coding for us and people use different fonts for their code, but flamewars about colors and fonts are quite rare. I don't see this as anything fundamentally different.
Is it somehow difficult to accept that different syntax may map to identical AST? Lisp is intentionally syntactically very close to its AST and I find this elegant. But why are we so hung up that the human interface for this has to be parens? Are people confused somehow that wanting to use different syntax means they want to change the language? Even if it no way forces people not to use parens if they want. Can't "the language" just be the AST?
Am I overlooking something? I find this to be an almost trivial solution to many many neverending flamewars. I personally don't like parens because I want computer to do what it does best, which is doing routine churn like matching up parens. I don't mind if somebody else wants to see the parens. Why should I mind? The computer doesn't care at all. Multiple people could edit the very same code, one with parens, one with brackets and one without either (e.g. significant indentation).
The Emacs solution is paredit or something, where the editor tracks the parens. But why do these have to be there at all if the computer already knows how to track them?
For me this is almost identical issue to C-family semicolons, tabs-vs-spaces indentation, formatting guidelines, the Guido colon, significant whitespace etc. Is this some authority or status or tribal thing or something? Or cultural lag from moveable type era?
I don't get it. Why do people find it so important how other people want to see the superficial syntax? It's like having strong opinions on how other people should paint the interiors of their house, to me. Or is there some mix-up between form and function, and we have different understandings of where the separation goes?
BTW, I recall seeing this post before, but it's not dated. As per wayback machine this has been published in 2020, and seemingly just has "repost=true" GET-variable for "dupe-busting(?)". Shouldn't this be marked as (2020) in the title?
I do metaprogramming in quite a few languages -- Python is the worst, and Lisp is the best (and I've got literally thousands of times more experience with Python). To me, homoiconicity is huge -- especially the comma and backtick operators make it a breeze. If you don't metaprogram, maybe you don't care -- but if you care at all about performance, a tiny collection of features enables a whole new programming paradigm.
Contrast this to Python. The best feature is f-strings, which approach the ease of backtick/comma. But the lack of homoiconicity means you can't just jam a bunch of statements in a list -- for crap's sake, indentation makes everything a pain.
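To make the contrast concrete, here is roughly what that style of Python metaprogramming looks like: you splice values into a *string* template and compile it, rather than splicing into a structured list the way backquote/comma does. (A hedged sketch; the function name is invented.)

```python
# Build a function definition as text, splicing a value in with an
# f-string, then compile it with exec(). The "template" is a flat
# string, not a tree you can walk or transform afterwards.
n = 5
src = f"def times_n(x):\n    return x * {n}\n"

namespace = {}
exec(src, namespace)
times_n = namespace["times_n"]
print(times_n(3))   # prints 15
```

Note how even the indentation has to be encoded as literal `\n    ` inside the string, which is part of what makes this approach painful.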
I fail to see the connection to syntax-vs-homoiconicity here. Python is definitely not the worst, e.g. in C you have maybe some preprocessor hacks. Python metaprogramming is also usually on "dynamic level" and a bit difficult to compare to macros as in e.g. Lisp.
I fail to see the connection with f-strings to lack of homoiconicity, let alone indentation. I'm also a bit amazed that people see indentation as painful; don't you indent your code even when the indentation is not significant?
In vim I tend to use the "dumb-smart-indentation" that just keeps the same indentation for any new line. Seems to work fine for my purposes: just tab for a new level and a backspace after to get back to the previous level. Although vim tries to be too smart about this nowadays, and often tends to screw it up with some constantly breaking magic.
You make a good point. We ought to be working in AST-land rather than spending time arguing about delimiters. It's legacy and it's a classic example of the bikeshed effect.
That being said, it's still an open problem to develop an ergonomic AST editor... I do want to see it.
Why does AST need a specific editor? Just pick what you want. If you like parens, just use parens. I'd prefer to just specify the tree with significant indentation. These would be just files.
I want to work at the graph level, not the text level. My belief is that we could avoid a lot of artificial complexity and yak shaving around parsers, name resolution, and particularly collaboration/version control if we worked at this higher level.
The point jangid made above about being able to specify part of the program with only one character makes a significant difference:
> What I like about Lisp's syntax is that the cursor (point) is always in a complete Lisp program. You move up one set of parenthesis, again a complete program. Move up further .... till you reach the top (file-level).
> So this gives good opportunity to Editor makers. Compile "this" level, 1-level-up, 2-level-up and so on. In an editor you can always check your program while coding.
You get the benefits of writing code composed of neat delineated expressions that nest arbitrarily but without the verbosity that parentheses add to the source file.
Though personally I don't particularly find (+ 1 2 3 4 5) less readable than 1+2+3+4+5, and since most of my programs don't have math expressions much more complicated than that, even without cmu-infix or alternatives (Maxima is great when you need to do real math, and for other related things I might as well link https://github.com/CodyReichert/awesome-cl#numerical-and-sci...) I'd find the rest of the tradeoffs worth it, much like once I thought despite Python not having i++ or ++i it was still worthwhile. (In Lisp, by the way, one would use (incf i).)
You don’t give up operator precedence, you just write it explicitly. Operator precedence is a consequence of implicit infix operator behavior.
What is the value of this statement?
3 + 100 % 2 / 5
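For what it's worth, under C-style precedence `%` and `/` bind tighter than `+` and associate left to right, so the expression groups as 3 + ((100 % 2) / 5). A quick Python check (note Python's `/` is true division, so it yields 3.0 where C's integer division would yield 3):

```python
value = 3 + 100 % 2 / 5
# % and / share a precedence level above +, left-associative:
grouped = 3 + ((100 % 2) / 5)
print(value, grouped)   # prints 3.0 3.0
```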
I’d like to have added a power calculation somewhere in there for illustration, but most general-purpose languages make that a function. It’s worth asking whether they ran out of infix symbols/syntax or chose a function for some other reason.
Probably the former. Python seems fine with infix **. But Python doesn't have pointers: C++ designers would have to invent something less conventional (not that they aren't famous for unconventional horrors, though).
False. All you really need is a bijection: McCarthy himself suggested f[x;y]<=>(f x y) and it seems reasonable (and it turns out to be actually useful!) to declare a domain of f where you permit xfy<=>(f x y)
"readability" is usually given to mean "most people believe they can read it" and not something useful like "most people understand it fully", and nearly any reduction in typing that operator-precedence can offer can be obtained with a simpler rule (like right-of-left) and ordering.
That is to say, these things (in their usual meaning) have net-negative value to programs and programmers, and are a frequent root cause of bugs.
> There's a reason people like syntax, because it is expressive.
There are also many people who like the imperial system more than the metric system.
Given that new languages with such syntax continue to be developed and that many express that they favor it, it is at best a personal taste, and at worst simply inertia.
“conventional mathematical notation” was never designed; much like the imperial system, it grew organically, and I find it somewhat arbitrary which operations receive an infix operator and which do not; it even depends on the language in some cases.
That being said, I really do not favor `string-append` where `strapp` would suffice. I especially do not favor `call-with-current-continuation` over `call/cc`.
It seems like the infix issue would be a pretty simple modification to eval: if the first thing isn’t callable, try the second thing. But I’ve never seen this done, so it must be more complicated than I realize.
The problem is that the first thing might be callable, and it might expect one or more arguments that could include functions. If your intention was to add something to the result of a function, but + was passed as input to that function, you wouldn't get what you expected.
Sure, if the first thing is callable, you always go with the default eval strategy, like in this case:
(map - (quote (1 2 3)))
I'm only talking about cases where the first thing isn't callable, but the second thing is:
(3 + 1)
It seems like this couldn't ever be a problem, because the only cases in which the infix strategy would be used are cases which would have been invalid programs anyway. Does that make sense?
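The idea can be sketched with a toy evaluator over nested lists (Python here; all names are invented, and a real Lisp would do this inside `eval` itself):

```python
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def tiny_eval(expr):
    """Prefix by default; fall back to infix only when the head
    isn't an operator but the second element is (toy sketch)."""
    if not isinstance(expr, list):
        return expr                       # atoms evaluate to themselves
    head = expr[0]
    if isinstance(head, str) and head in OPS:
        # Normal prefix form, variadic like Lisp: (+ 1 2 3)
        args = [tiny_eval(a) for a in expr[1:]]
        result = args[0]
        for a in args[1:]:
            result = OPS[head](result, a)
        return result
    if len(expr) == 3 and isinstance(expr[1], str) and expr[1] in OPS:
        # Infix fallback: (3 + 1). Only reached when the head wasn't
        # an operator, i.e. the form was invalid as prefix anyway.
        return OPS[expr[1]](tiny_eval(expr[0]), tiny_eval(expr[2]))
    raise ValueError(f"don't know how to evaluate {expr!r}")

print(tiny_eval(["+", 1, 2, 3]))   # prints 6
print(tiny_eval([3, "+", 1]))      # prints 4
```

Since the infix branch fires only on forms that the prefix rule rejects, no valid prefix program changes meaning, which is the commenter's point.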
The list syntax isn't really a grammar but a data structure spec given to the compiler. The only power here is in using the same data structure. It's very powerful, but that's the basis of it.
I know different pedagogical approaches work for different people, but in my experience the fastest and easiest way to learn Lisp is to bite the bullet and jump right into it. I think when it comes to the parentheses of Lisp, the premise is straightforward; there just isn't a whole lot to actually learn. I believe most of the difficulty comes from people psyching themselves out before they even try. That describes my personal experience with the matter anyway.
Also with regards to prefix notation being unintuitive: We already teach something very similar to schoolchildren learning arithmetic:
```
  1
+ 2
---
```
Here, as with (+ 1 2), the operator is on the left most side. The operands are arranged horizontally in lisp instead of vertically, but the supposed weirdness of the operator being on the left doesn't seem to bother people when it comes to arithmetic.
I always read that as (1 + 2). I read it left to right, then top to bottom, so it would read as [null, "1", "+", "2"] and not as [null, "+", "1", "2"].
Tbh I generally vocalize (+ 1 2) as "one plus two". Casual English usually uses infix notation, but I think "add one and two" is valid and generally understood.
What about (sqrt (+ (* 3 3) (* 4 4))) read as "square root of the sum of the product of 3 by 3 and the product of 4 by 4", instead of √(3^2+4^2), where √ is prefix, + is infix, ^ is exponent, not to speak of the implicit precedences.
in lambdatalk (http://lambdaway.free.fr) one could go beyond and mix html/css using the same syntax, for instance
{div {@ style="color:red"} the hypotenuse of a square triangle (3,4) is equal to {sqrt {+ {* 3 3} {* 4 4}}} }
which can be read like this « write in a div html element, whose style attribute is color red, the hypotenuse of a square triangle (3,4) is equal to the square root of the sum of the product of 3 by 3 and 4 by 4 »
is displayed as « the hypotenuse of a square triangle (3,4) is equal to 5 »
In fact prefixed parenthesis expressions follow the way we think and speak.
APL can be read as English as well, and frequently also obviates nesting (Lisp revels in it, but humans don't deal well with deeply nested structures). Your example: 0.5*⍨+/×⍨3 4. That is, the 0.5 power of the sum of the squares of the legs. But notice: uniform precedence (like Lisp) and no nesting whatsoever.
(√ as sqrt is not generally primitive, though it can be trivially implemented.)
I almost wonder if Lisp pedagogy would be improved if the ‘+’, ‘-‘, etc. operators weren’t introduced for some time, and maybe instead ‘add’ or ‘subtract’ were taught instead. There’d be less knee-jerk opposition to the lack of infix notation.
Then it would become complaint against verbosity by those who only saw first lesson of a lisp book.
For someone who started learning Lisp way, way late in my programming journey, I would not have appreciated it had I not gone through half a dozen other languages before. The sheer simplicity of the foundational concept is liberating.
That simplicity means Lisp actively encourages exploration (which is why you see so many Lisp dialects). Unfortunately, most people don't really learn anything new unless they're forced into it. Most people aren't into exploration. They treat a language as a short-term tool to get a paycheck, and anything making them use more braincells, even to their own long-term benefit, is an annoyance.
To each their own; it's just that, as good as Lisp is, its MO doesn't map well onto that of most of the populace.
Agreed. Even more of a reason to not make + into add and - into subtr.
I've thought about how the verbosity can be avoided, but at a certain level of complexity it is better to have long descriptive names; I'm not experienced enough with big projects to propose a solution.
> The operands are arranged horizontally in lisp instead of vertically, but the supposed weirdness of the operator being on the left doesn't seem to bother people when it comes to arithmetic.
I'm not sure I agree. I would argue that people read your example left to right, top to bottom, which would be "1 + 2". At least for people that read their native language this way. Maybe people who read from right to left would read "1 2 +" and be predisposed to Forth?
My initial problem with parentheses wasn't the stacking of them -- that's inevitable even in elementary arithmetic -- but with the fact that x is not the same as (x), which doesn't happen in math and doesn't get emphasised enough in Lisp intros. Once I got that, everything was easy.
Edit: To answer to the responses to this comment, it's not that it's conceptually hard, it's just that the parenthesis symbol in its usual incarnation does not work that way, which leads to confusion.
Hmm, never thought of ∅ as a φ. (According to Wikipedia, the symbol was introduced in the 1930s by Weil, inspired by the letter Ø in the Danish and Norwegian alphabets.)
When I'm reading or writing code for Java or C, I am reading or writing code. My brain parses all the visible characters and safely ignores the invisible whitespace. Indentation is just better readability.
When I'm reading or writing Python/YAML, suddenly I'm having to pay attention to what is not visible to my eye as well. It's extra cognitive load, and that reduces readability.
Sorry for the late response, but I find this interesting. I've encountered similar reports earlier regarding this question. There may be quite fundamental differences in how people "see" code. I don't pay any attention to the block delimiters, and am easily fooled if the indentation doesn't match the block structure.
Do you ever make the infamous semicolon-after-if-clause bug in C? I do sometimes and it can take ages just to see that semicolon.
No. There are three fundamental ways to encode a syntax tree - prefix, infix, and postfix. Haskell (ML) is prefix, Lisp is infix, and Forth is postfix. (I.e. Forth-like is the opposite of ML-like, not Lisp-like.)
The advantage of infix is that you don't need to know the argument counts for functions/macros to reconstruct the tree, but the disadvantage is extra parens.
I am slowly coming to the (subjective) conclusion that postfix is the most natural, because it matches the (typical) evaluation order.
> Haskell (ML) is prefix, Lisp is infix, and Forth is postfix.
With respect, no. Lisp is by default prefix (Polish notation) but with some work you can make it behave as infix or postfix.
Haskell is naturally either prefix or infix depending on how you define and call your functions (the use of backticks or parens around the function name in a call changes it from prefix to infix and vice-versa. It's a rather elegant solution to the problem.)
As you stated Forth is postfix (Reverse Polish notation).
You don't have to agree; I said it's subjective. I prefer ML-like to Lisp-like, because I want typechecking anyway (which involves counting arguments), and that means we can dispose of most Lisp parens. And lately I noticed I use a lot of "$" (apply) in my code, and while it reads left-to-right, it mostly evaluates right-to-left. So why not switch to Forth-like, which reads and (mostly) evaluates left-to-right?
I've built the beginnings of a Forth compiler in Common Lisp just to prove the concept. I find Forth syntax more convenient for some tasks (like emitting a series of HTML constructs to a stream, in precise order) than Lisp syntax and since Common Lisp gives you access to the values stack it's pretty easy to turn it into Forth. Of course you have to give up variadic functions, but that's the nature of Forth.
I haven't tried the reverse (building a Lisp in Forth) but I imagine it would be straightforward. One can build a Lisp from just about anything.
As I read this, Greenspun's tenth rule of programming humorously came to mind: "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp."
Generalize it by replacing "C or Fortran" with any language whose user wants more power over its syntax and semantics; witnessing the author pull it off in just 44 lines of JavaScript was a joy.
This is honestly not worth debate. It's like arguing about having kids (whether pro or con) with people who don't have kids. There is no way to have the experience of lisp without being an experienced lisp hacker.