Is there a real difference between Syntax and Syntactic Sugar?

What is the difference between "Syntax" and "Syntactic Sugar"?


Background

The Syntactic Sugar Wikipedia page states:

In computer science, syntactic sugar is syntax within a programming language that is designed to make things easier to read or to express. It makes the language "sweeter" for human use: things can be expressed more clearly, more concisely, or in an alternative style that some may prefer.

I don't really understand what the difference between Syntactic Sugar and Syntax is.

I appreciate the point that the sugared version can be clearer and more succinct, and maybe shake off some boilerplate. But I do feel that, at bottom, all syntax is an abstraction over what the code compiles down to.

From the same Wikipedia page:

Language processors, including compilers, static analyzers, and the like, often expand sugared constructs into more basic constructs before processing, a process sometimes referred to as "desugaring".

Even if I read "often" in that statement as "always", so that the difference is whether or not the compiler "desugars" the syntax before moving on to the next stage: how would a coder who doesn't know the compiler's innards know (or care) which syntax is sugared and which is not?

A very related question on this page, "Rigorous Definition of Syntactic Sugar?" has an answer that begins:

I don't think you can have a definition of syntactic sugar as the term is BS and is likely used by people talking about "real programmers" using "real tools" on "real operating systems".

Which suggests to me that maybe there isn't much of a difference to the coder using the language. Perhaps the difference is only noticeable to the compiler-writer? Although there may be cases where it helps a programmer using the language to know what is under the hood of the syntactic sugar. (But perhaps in reality, discussion of the subject tends to use the term as flame bait?)

The heart of the question

So ... the short version of the question:

  • Is there a real difference between Syntax and Syntactic Sugar?
  • Who is it important to?

Extra food for thought

Bonus points for addressing the apparent contradiction below:

An example is given on the Wikipedia page:

For example, in the C language, the `a[i]` notation is syntactic sugar for `*(a + i)`.

While another answer to the question linked above talks about the same example:

Now think about it. Think of a C program that uses arrays in some way.

And summarizes that:

The `a[i]` notation facilitates this abstraction. It's not syntactic sugar.

The opposite conclusion for the same example!





Answers:


The main difference is that syntax is grammar defined in a language so that you can expose some functionality. Once you can reach that functionality, any other syntax that lets you do the same thing is considered sugar. That, of course, leads to odd scenarios about which of two syntaxes is the sugar, especially since it is not always clear which came first.

In practice, syntactic sugar is used only to describe syntax that was added to a language for ease of use, for example an infix form of `map`. I would consider C's array indexing to be syntactic sugar.

This matters mostly because elegant designs tend to limit the amount of duplication. Needing (or at least wanting) syntactic sugar is seen by some as a sign of a design failure.







Syntax is what a language processor uses to understand what the constructs of a language mean. Constructs that are considered syntactic sugar must also be interpreted by the language processor, and are therefore part of the language's syntax.

What sets syntactic sugar apart from the rest of a language's syntax is that it would be possible to remove syntactic sugar from the language without affecting the programs that can be written in the language.

To give a more formal definition, I would say:

Syntactic sugar is that part of the syntax of a language whose effects are defined in terms of other syntactic constructs in the language.

This is in no way intended to denigrate syntactic sugar, or the languages in which it occurs, since the use of syntactic sugar often leads to programs whose intent is more understandable.







The other answers miss a key concept: abstract syntax. Without it, the term "syntactic sugar" makes no sense.

Abstract syntax defines the elements and structure of a language, and how phrases of that language can be combined to build larger phrases. Abstract syntax is independent of concrete syntax. The term "syntactic sugar", as I understand it, refers to concrete syntax.

In general, when designing a language, you'll want to create concrete syntax for each term of your abstract syntax, so that users can write code in your language using plain text.

Suppose you create a cumbersome concrete syntax for foo. Users complain, and you implement a new concrete syntax to represent the same abstract syntax. The result is that your abstract syntax and semantics haven't changed, but you now have two concrete syntaxes for the same abstract-syntax term.

I think that's what people mean by "syntactic sugar": changes that affect only concrete syntax, not abstract syntax or semantics.

And so the difference between "syntactic sugar" and "concrete syntax" is now clear. To me, anyway. :)

This interpretation also helps explain what Alan Perlis might have meant when he said, "Syntactic sugar causes cancer of the semicolon": all the concrete syntactic sugar in the world can't fix weak abstract syntax, and the effort you spend adding that sugar is effort you're not spending on the real problem: the abstract syntax.


I should also note that this is just my opinion; I only believe it because it's the only interpretation I can think of that makes sense to me.







Syntactic sugar is a subset of language syntax. The basic idea is that there is more than one way to say the same thing.

What makes it difficult to say which pieces are syntactic sugar and which are "pure syntax" are statements like "it's hard to say which form came first", or "it's hard to know how the language's author intended it", or "it's somewhat arbitrary to decide which form is simpler".

What makes it easy to decide which pieces are pure and which are sugared is to pose the question in the context of a particular compiler or interpreter. The pure syntax is the stuff the compiler converts directly into machine code, or that the interpreter responds to directly. The sugar is the stuff that is first converted into some other syntax before those direct things happen. Depending on the implementation, this may or may not match what the author intended, or even what the language specification claims.

In practice, that is how the question actually gets decided.



Really, your first quote from Wikipedia says it all: "...makes things easier to read...", "...sweeter for humans to use...".

In written English, contracted forms such as "don't" or "isn't" can be viewed as syntactic sugar.







Usually, syntax sugar is a part of the language that can be expressed in terms of an existing part of the language (syntax) without loss of generality, though possibly with some loss of clarity. Sometimes compilers have an explicit desugaring step that transforms the AST generated from the source code, applying simple rules to remove the nodes that correspond to sugar.

For example, Haskell has syntax sugar for monads (the `do` notation), with the following rules applied recursively:
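The original listing is missing from this copy; the `do`-notation desugaring rules, roughly as given in the Haskell Report (simplified here, omitting pattern-match-failure handling), look like:

```haskell
do { e }                  =  e
do { e; stmts }           =  e >> do { stmts }
do { p <- e; stmts }      =  e >>= \p -> do { stmts }
do { let decls; stmts }   =  let decls in do { stmts }
```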

For the moment it doesn't matter what exactly it means; you can see that the special syntax on the LHS can be converted into something more basic on the RHS (namely function applications, lambdas, and `let`s). This approach keeps the best of both worlds:

  1. The syntax on the LHS is easier for programmers (syntax sugar), letting them express existing ideas more clearly
  2. Since support for the RHS constructs is already in the compiler, they don't need to be treated as anything special outside of parsing and desugaring (except for error reporting)

Similarly, in C you can think of `a[i]` as being desugared by a rewrite rule into `*(a + i)` (due to operator overloading and the like, this does not apply to C++).

You could imagine writing, without this construct, all of the C programs that use it today. It would be harder for programmers, though, so the sugar was provided (I suppose in the 70s it might have made things easier for compilers as well). It may arguably make things less clear, since technically you can also add a perfectly valid rewrite rule in the opposite direction.

Is syntax sugar bad? Not necessarily; the risk is that it gets used cargo-cult style, without understanding its deeper meaning. For example, the following kinds of functions are equivalent in Haskell, yet many beginners would write the first form without realizing that they are overusing syntax sugar:
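The original code is lost from this copy; a typical pair of equivalent definitions (my own reconstruction) would be something like:

```haskell
-- Sugared: do-notation used for a simple one-liner
echo :: IO ()
echo = do
  line <- getLine
  putStrLn line

-- Desugared: the same thing, written directly with (>>=)
echo' :: IO ()
echo' = getLine >>= putStrLn
```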

In addition, syntax sugar can overcomplicate a language, or be so narrow that it doesn't allow generalized, idiomatic code. It can also mean the language isn't powerful enough to do certain things easily. That can be intentional (denying developers sharp tools, or a very specific niche language where adding a more powerful construct would hurt other goals) or an oversight; the latter case is what gave syntax sugar its bad name. If the language is powerful enough to express the same thing with other constructs, it is considered more elegant to use them.


I think the most obvious example would be the `+=` syntax in C.

`x = x + y;`

and

`x += y;`

do exactly the same thing and compile to the exact same machine instructions. The second form saves a few typed characters, but more importantly it makes quite clear that you are modifying a value based on its current value.

I wanted to cite the postfix/prefix `++` operator as the canonical example, but realized it is more than syntactic sugar: there is no way to express the difference between `++i` and `i++` in a single expression using other syntax.



First, I'll address some of the other answers with a concrete example. The C++11 range-based for loop (similar to foreach loops in various other languages)

is exactly equivalent to (i.e., a sugared version of)

Although it adds no new abstract syntax or semantics to the language, it has real uses.

In the first version, the intent (visiting each element of a container) is stated explicitly. It also forbids unusual behavior, such as modifying the container during traversal, advancing the iterator further inside the loop body, or getting the loop conditions subtly wrong. This removes possible sources of error, and in doing so makes the code easier to read and reason about.

For example, a one-character bug in the second version, say writing `<=` in place of `!=` in the loop condition, produces a one-past-the-end dereference and undefined behavior.

The sugared version is useful precisely because it is more restrictive and therefore easier to trust and understand.


Second to the original question:

Is there a real difference between Syntax and Syntactic Sugar?

No. "Syntactic sugar" is (concrete) language syntax that is considered "sugar" because it doesn't extend the abstract syntax or the core functionality of the language. I like Matt Fenwick's answer.

Who is it important to?

It's just as important to the users of the language as any other syntax: the language provides sugar to aid (and, in a sense, bless) certain idioms.

Finally, on the bonus question

The [] notation facilitates this abstraction.

That sounds a lot like the definition of syntactic sugar: it aids (and carries the language designers' blessing of) the use of pointers as arrays. The `a[i]` form isn't really any more restrictive than `*(a + i)`; the only difference is the clear message of intent (and a slight gain in legibility).




Whatever the original connotation of the term, nowadays it is primarily derogatory, almost always phrased as "just" or "mere" syntactic sugar. It pretty much only matters to programmers who do things in an unreadable manner and want a precise way to justify themselves to their colleagues. A definition by those who mainly use the term that way today would, from their point of view, be something like:

Syntax that is redundant with other general syntax to provide a crutch for programmers who do not really understand the language.

That's why you get two opposite conclusions about the same syntactic element. Your first example uses the original, positive meaning of the term for array notation, similar to Bart's answer. Your second example defends array notation against the accusation of being syntactic sugar in the derogatory sense; in other words, it argues that the syntax is a useful abstraction rather than a crutch.





