• rdri@lemmy.world · 2 days ago

    If you try what I wrote it will throw a NaN. I was asking about the first part of the proposal.

    • squaresinger@lemmy.world · 1 day ago

      The NaN isn’t thrown. It’s just silently put into the result, and in this case it’s completely unintelligible. Why would an operation between two strings result in a number?

      "Hello" - "world" is an obvious programmer mistake. The interpreter knows that this is not something anyone will ever do on purpose, so it should not silently handle it.

      The main problem here is downward coercion. Coercion should only go towards the more permissive type, never towards the more restrictive type.

      Coercing a number to a string makes sense, because each number has a representation as a string, so "hello" + 1 makes intuitive sense.

      Coercing a string to a number makes no sense, because not every string has a representation as a number (in fact, most strings don’t). "hello" - 1 makes no sense at all. So converting a string to a number should be done by an explicit cast or a conversion function. Using - with a string should always result in a thrown error/exception.
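      A minimal JavaScript sketch of that asymmetry (the variable names are mine, for illustration):

      ```javascript
      // "+" coerces the number up to the more permissive type (string):
      const concat = "hello" + 1; // "hello1"

      // "-" coerces the string down to a number, which mostly can't work,
      // so the result is silently NaN instead of a thrown error:
      const minus = "hello" - 1; // NaN

      console.log(concat, minus);
      ```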

        • rdri@lemmy.world · edited · 1 day ago

        The interpreter knows that this is not something anyone will ever do on purpose, so it should not silently handle it.

        You basically argued against the whole point of NaN. I may even agree that it should always throw an error instead, but… I found a good explanation by someone:

        NaN is the number which results from math operations which make no sense

        And the above example fits that.

        "hello" - 1 makes no sense at all.

        Yeah, but actually there can be many interpretations of what someone would mean by that: increment the character code of the last symbol, or search for “1” and remove it from the string. The important thing is that it’s not obvious what the person who wrote it really wants, without additional input.
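        For illustration, those two readings sketched in JavaScript (both helper functions are hypothetical, not anything JS actually does):

        ```javascript
        // Reading 1: increment the character code of the last symbol.
        const bumpLastChar = (s) =>
          s.slice(0, -1) + String.fromCharCode(s.charCodeAt(s.length - 1) + 1);

        // Reading 2: search for the digit and remove it from the string.
        const removeDigit = (s, n) => s.split(String(n)).join("");

        console.log(bumpLastChar("hello"));    // "hellp"
        console.log(removeDigit("h1ello", 1)); // "hello"
        ```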

        Anyway, your original suggestion was about discrepancy between + and - functionality. I only pointed out that it’s natural when dealing with various data types.

        Maybe it is one of the reasons why some languages use . instead of + for strings.

          • squaresinger@lemmy.world · 1 day ago

          You basically defied the whole NaN thing. I may even agree that it should always throw an error instead, but… Found a good explanation by someone:

          NaN is the number which results from math operations which make no sense

          Well, while this is technically the explanation, it really isn’t a good one.

          x + 1 with x not being defined also doesn’t result in a NaN; instead it throws a ReferenceError, even though that undefined variable isn’t a number either. And x = 1; x.toUpperCase(); also doesn’t silently do anything, even though in this case it could totally return "1" by coercing x to a string first. Instead it throws a TypeError.

          It’s really only around number handling where JS gets so weird.
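          The contrast is easy to check (a small sketch; the flag variables are mine):

          ```javascript
          // An undefined variable throws instead of guessing:
          let threwReference = false;
          try { notDefined + 1; } catch (e) { threwReference = e instanceof ReferenceError; }

          // A string method on a number also throws:
          let threwType = false;
          try { (1).toUpperCase(); } catch (e) { threwType = e instanceof TypeError; }

          // But nonsense arithmetic on strings is silently NaN:
          const silent = "hello" - "world";

          console.log(threwReference, threwType, silent); // true true NaN
          ```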

          Yeah but actually there can be many interpretations of what someone would mean by that. Increase the bytecode of the last symbol, or search for “1” and wipe it from string. The important thing is that it’s not obvious what a person who wrote that wants really, without additional input.

          That’s exactly the thing. It’s not obvious what the person wants, and a NaN is most likely not what the person wants either. So what’s the point in defaulting to something they certainly didn’t want, instead of making it obvious that the input made no sense?

          A similarly ambiguous situation would be something like x = 2 y. For someone with a mathematical background this clearly looks like x = 2 * y with an implicit multiplication sign. But interpreting implicit multiplication signs is not in the JS standard; if you want multiplication, you need to write the sign explicitly. And thus JS dutifully throws a SyntaxError instead of just guessing what the programmer maybe wanted.

          Anyway, your original suggestion was about discrepancy between + and - functionality. I only pointed out that it’s natural when dealing with various data types.

          My main point here was that if you use mathematical symbols for string operations, all of the accepted operations using mathematical symbols need to be string operations, like e.g. "ab" * 2 => "abab", which many languages provide. That’s consistent. I didn’t mean that all of these operators need to be implemented, but if they aren’t, they should throw an error (I stated that in my original comment).
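          For comparison, JS itself doesn’t define * for strings, but instead of throwing it coerces the string and yields NaN (a small sketch; the Python-style repetition is only available via the explicit method):

          ```javascript
          const repeated = "ab" * 2;       // NaN — not "abab", and no error either
          const explicit = "ab".repeat(2); // "abab" — the explicit JS way
          ```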

          What’s an issue here is that "1" + 1 does a string concatenation, while "1" - 1 converts the string to a number and does a math operation. That’s inconsistent. Even if you want to use that feature, you will stumble over + not performing a math operation like - does.
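          The inconsistency in one sketch:

          ```javascript
          const plus = "1" + 1;  // the string wins: concatenation -> "11"
          const minus = "1" - 1; // the number wins: subtraction -> 0
          ```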

          So it should either be that + and - always do math operations and you have a separate operator (e.g. . or ..) for concatenation, or, if you overload + with string operations, all of the operators that don’t throw an exception need to be strictly string-operations-only.