eigenspace 3 days ago

Nice to see this passed around on Hacker News. I think the whole concept of lenses is super cool and useful, but it suffers from the usual Haskellification problem of being presented in an unnecessarily convoluted way.

I think Accessors.jl has a quite nice and usable implementation of lenses. It's something I use a lot, even in code where I'm working with a lot of mutable data, because it's nice to localize and have exact control over what gets mutated and when (and I often find myself storing some pretty complex immutable data in 'simpler' mutable containers).

kazinator 3 days ago

Certain aspects of this remind me of the modf macro for Common Lisp:

https://github.com/smithzvk/modf

You use place syntax like what is used with incf or setf, denoting part of some complex object. But the modification is made to the corresponding part of a copy of the object, and the entire new object is returned.

binary132 3 days ago

I have to admit I don’t really understand the point of doing this instead of just obj.a = 2 or whatever.

  • laszlokorte 3 days ago

    Say you want to do obj.child.foo[3].bar += 2, but all the data is immutable, so instead of mutating you need to do a deep copy along the path.

    Lenses are an embedded DSL for doing this via syntax that reads similarly to the mutable variant. Additionally, they let you compose many such transformations.
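    A rough sketch of the idea in Python (hypothetical helper names, not a real lens library), using frozen dataclasses and tuples as the immutable data:

```python
from dataclasses import dataclass, replace

# A lens is just a (getter, setter) pair; the setter returns a *new* object.
def field_lens(name):
    get = lambda obj: getattr(obj, name)
    set_ = lambda obj, value: replace(obj, **{name: value})
    return get, set_

def index_lens(i):
    get = lambda seq: seq[i]
    set_ = lambda seq, value: seq[:i] + (value,) + seq[i + 1:]  # new tuple, old one untouched
    return get, set_

def compose(outer, inner):
    oget, oset = outer
    iget, iset = inner
    get = lambda obj: iget(oget(obj))
    set_ = lambda obj, value: oset(obj, iset(oget(obj), value))
    return get, set_

def over(lens, f, obj):
    get, set_ = lens
    return set_(obj, f(get(obj)))

@dataclass(frozen=True)
class Bar:
    bar: int

@dataclass(frozen=True)
class Child:
    foo: tuple

@dataclass(frozen=True)
class Obj:
    child: Child

# Composes to the path "obj.child.foo[3].bar"
lens = compose(field_lens("child"),
               compose(field_lens("foo"),
                       compose(index_lens(3), field_lens("bar"))))

old = Obj(Child((Bar(0), Bar(1), Bar(2), Bar(3))))
new = over(lens, lambda x: x + 2, old)  # like obj.child.foo[3].bar += 2, without mutation
print(new.child.foo[3].bar)  # 5
print(old.child.foo[3].bar)  # 3, the original is untouched
```

    The deep-copy-along-the-path plumbing lives entirely inside the composed setter; call sites just say which path and which update.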

  • electroly 3 days ago

    The other replies covered the answer about immutability well, but I have the further question: why isn't this built into languages as syntax sugar, so that OP's suggested line would work with immutable structures?

    As a dilettante at programming language design, I have my own toy language. It uses exclusively immutable data structures (C++ "immer"). I present it to the programmer as simple value semantics. `obj.foo[5].bar.a = 2` works and sets `obj` to a new structure where the path through `foo`, `bar`, to `a` has all been rewritten. Since I put it in as a language feature, users don't have to learn about lenses. Why isn't this technique more common in programming language design? Is it so offensive that the syntax `obj.a = 2` ends up rebinding `obj` itself? The rule in my language is that assignment rebinds the leftmost "base" of a chain of `.` field and `[]` array index accesses on the LHS. I'm ignorant of the theory or practical consideration that might lead a competent designer not to implement it this way.

    • lioeters 3 days ago

      It's an interesting question, why immutability is not built into more languages as the default, so that the most intuitive syntax of assignment produces new values.

      Without having any expertise in the matter, I'd guess that mutability has the advantage of performance and efficient handling of memory.

        obj.foo[5].bar.a = 2
      
      An immutable interpretation of this would involve producing new objects and arrays, moving or copying values.

      Another possible advantage of the mutable default is that you can pass around references to inner values.
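      Spelled out, the immutable interpretation rebuilds every container along the path. An illustrative Python sketch with plain dicts and tuples (not any particular language's actual semantics):

```python
# Mutable version: obj["foo"][5]["bar"]["a"] = 2 changes obj in place.
# An immutable version has to rebuild each container along the path:
def assign_immutable(obj):
    foo = obj["foo"]
    elem = foo[5]
    bar = elem["bar"]
    new_bar = {**bar, "a": 2}                  # new innermost dict
    new_elem = {**elem, "bar": new_bar}        # new element
    new_foo = foo[:5] + (new_elem,) + foo[6:]  # new tuple
    return {**obj, "foo": new_foo}             # new outermost dict

old = {"foo": tuple({"bar": {"a": 0}} for _ in range(6)), "other": "unchanged"}
new = assign_immutable(old)
print(new["foo"][5]["bar"]["a"])  # 2
print(old["foo"][5]["bar"]["a"])  # 0, the original is unchanged
```

      Note that only the nodes on the path get copied; untouched siblings (like "other" here) are shared between the old and new values.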

      • electroly a day ago

        That's always the case with immutable data structures; this assignment syntax didn't create that problem. If you used lenses to write 2 into "a", and you expected to get back a new "obj", you would still need to produce all those new objects and arrays. That's just immutable data structure stuff. I'm only asking about the assignment syntax here.

    • WorldMaker 2 days ago

      One of the reasons functional languages like to make mutable code look very different from immutable code is to try to make it clear which is which when reading the code, to help avoid mistakes.

      > Is it so offensive that the syntax `obj.a = 2` ends up rebinding `obj` itself?

      That does imply that the `obj` binding itself is mutable, so if you are trying for entirely immutable data structures (by default), you do probably want to avoid that.

      This is why the syntax sugar, in languages that have been exploring syntax sugar for it, starts to look like:

          let newObj = { obj with a = 2 }
      
      You still want to be able to name the new immutable binding.

      • electroly a day ago

        My language doesn't have any other kind; it's not immutable by default, it's immutable only. There are no mutable types or reference semantics, so there's no other kind of type that I need to differentiate. That's my question--why haven't other languages taken this approach? Many newer languages today are full-throated defenses of immutable data structures--why do they still make the mutable structures the easiest, syntactically, to change? Why not the other way around? Julia is fastest with immutable structures--why provide a built-in syntax for complex assignment to mutable types, but then relegate lenses to a library that only FP aficionados will use? We don't want add() and subtract() when we have + and -; why should we live with set() when we have =?

        I must be missing it because it worked out pretty nicely in my toy language. Complex assignments are written in exactly the way that people expect them to be. That's why I think it must be about taste or practical consideration--obviously it's possible to write a language like this. But experienced designers don't, presumably because it's a bad idea, and I don't understand what the badness is. Since my language is a toy, I likely haven't hit the practical considerations.

        Lenses, to me, feel at home in Haskell where the entire language is a game to see how much theory you can implement in the "userspace" of a tight, maximally-orthogonal FP language. But this is Julia, a monstrously large, imperative, Algol-family language with every possible language feature built-in, intended to be a practical language for analysis by people who aren't programming language experts. Julia's compiler already has knowledge of immutable types which it uses for optimization. Seems like they could do better than lenses if they weren't forced to implement it as a library in the language itself.

        • WorldMaker 20 hours ago

          > But experienced designers don't, presumably because it's a bad idea, and I don't understand what the badness is.

          I don't think it's a bad idea, and I don't see anyone else saying that. I did try to give you a couple practical considerations, but I don't think they stop your idea for a toy language from existing or suggest anything you are trying to do is "bad".

          > We don't want add() and subtract() when we have + and -; why should we live with set() when we have =?

          This question might actually be leading you closer to answers regarding your confusion than you think it is.

          One over-simplifying perspective is that imperative languages are the languages that most want operators like + and -, and functional languages have mostly been the languages that want to use add() and subtract() functions. A good functional language wants "everything" to be a function. If you look back at early Lisps, almost all of them supported `(add 2 3)`, but not all of them supported `(+ 2 3)`.

          (ETA: accidental post splice was here.)

          Then the functional languages picked up currying, where it is useful to refer to the function `(add 2)` as the function that adds two to the next argument, and even `(add)` as the function that takes the next two arguments and adds them together. In a truly functional language, designers often do want `add()` and `subtract()` as reusable, curryable functions more than they want `+` and `-`, because as tools, `add()` and `subtract()` work more like the rest of the language. As for `set()`, Lisps have almost always only ever had `(let variableName …)` type functions. `=` in most classic functional languages almost always meant an equality check. It's very much imperative languages that gifted us `=` as "assignment" or "set", and then, as a consequence of that, made equality a double (or worse, triple) `==`.

          It's only now, after imperative languages have "won" as much as they have and have proven the usefulness of complicated "PEMDAS" parsers, that infix operators have become so common. (It's not quite a universal fact, but a lot of functional languages have had much simpler parsers than their imperative language neighbors. Infix operators are a huge complicated thing to parse, if you haven't already noticed in your toy language.)

          You denigrate Haskell offhand, but a thing I appreciate that is relevant to all this is that Haskell was also one of the first languages to try hard to strongly merge the two worlds: it supports infix usage of any function, and the infix operators of the imperative world aren't that special syntactically; they are just infix functions. This is also why you'll see a lot of Haskell documentation refer to it as `(+)` instead of `+`, because `(+)` is the "real name" of the function and `+` is just the infix form. Haskell wants an `add()` and `subtract()`; it calls them `(+)` and `(-)`. It supports currying, so `(+) 2` can be a function.

          A functional programming language sort of wants everything to be a function, and operators are just special names for functions. Many functional languages, both historic and current, do ask "why do we need + and - when we have (add) and (subtract)?" and even "why do we need = when we have (let)?" (It may be useful to note, too, the subtly different imperative versus functional instincts on where the parentheses go when discussing function names: imperative languages often put them as a suffix, almost like an afterthought, while functional languages often put them surrounding the name, directing attention inside.)

          You suggest several times that lenses are "theory" that "only FP aficionados will use", but lenses are pretty "basic" and "boring" from the perspective of "everything is functions". You don't need a lot of theory to understand lenses, even if the goofy-sounding name sometimes makes it sound far more complicated than it is.

          Which isn't to say that languages can't do better with syntactic sugar, just that one of the reasons this often isn't handled with syntactic sugar from a functional programming perspective is "why would it need to be? it's very simple". Immutable types have a longer history in functional programming languages, so their view of what is "simple" and what should be "syntactic sugar" is maybe obvious to explain from their very different perspective.

          There is absolutely a lot of space to keep exploring new syntactic sugar and increasingly better ways for functional languages to take the best ideas of imperative languages. (Again, I appreciate the light humor that Haskell has the reputation today of being the language most drowned in FP theory, while also knowing it has been one of the languages that has done an absurd amount of exploration of imperative syntax from a functional standpoint, both in the way that infix operators are just infix syntax for functions, and in things like do-notation.)

          Please keep playing with your toy language and imperative/mutable syntax for immutable data structures, I think that's great. I've done similar experiments in my own toy languages. I think the answer to why "big languages" haven't done it yet, isn't because it is a "bad" idea, but because it is a matter of perspective. Most functional languages don't want imperative syntax or don't want functional/immutable things to look like imperative/mutable syntax. Again, not because it is "bad", just because they have very different family trees.

          • electroly 20 hours ago

            Oh, I meant that I suspect it's a bad idea. I've got a gut feeling that I can't put my finger on. I implemented it anyway because my language's design "pointed" in that direction based on prior decisions I had made (re: not having any reference semantics), but I can't shake the feeling that I've created something internally consistent but confusing to people trying to learn it. Or that it will hit a wall at some point and I'll suddenly realize why this is not done. I'll have to think about it some more. Thanks for the discussion!

            • WorldMaker 19 hours ago

              Ah, that makes sense. I went on to a longer length on that, but apparently my browser accidentally posted a partially complete draft.

              Hopefully the remainder post adds additional perspective.

  • ssivark 3 days ago

    The difference doesn't matter when you have a shallow structure, can access fields directly, and have a few lines of code. But field access does not compose easily if you have a nested hierarchy of objects. Your natural choice in the "OOP style" is to write a lot of boilerplate to point to each different field you want to get/set. Say you get bored of the tedium and want "higher-order" accessors that compose well -- because ultimately all look-up operations are fundamentally similar in a sense, and you only need to write traversals once per data structure. E.g., instead of writing yet another depth-first search implementation with for loops, you could easily tie together a standard DFS implementation (traversal) from a library with accessors for the fields you care to work with.

    One way to think of the goal of the functional paradigm is to allow extreme modularity (reuse) with minimal boilerplate [1]. The belief is that minimal boilerplate + maximum reuse (not in ad-hoc ways, but using the strict structure of higher-order patterns) leads to easily maintainable, bug-free code -- especially in rapidly evolving codebases -- for the one-time cost of understanding these higher-order abstractions. This is why people keep harping on pieces that "compose well". The emphasis on immutability is merely a means to achieve that goal, and lenses are part of the solution to allow great ergonomics (composability) along with immutability. For the general idea, look at this illustrative blog post [2], which rewrites the same small code block ten times, making it more modular and terse each time.

    [1] https://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.p...

    [2] https://yannesposito.com/Scratch/en/blog/Haskell-the-Hard-Wa...

    Once the language is expressive enough to compose pieces well and write extremely modular code, the next bit that people get excited about is smart compilers that can: transform this to efficient low-level implementations (eg. by fusing accesses), enforce round-trip consistency between get & set lenses (or complain about flaws), etc.
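    The traversal-reuse point can be shown with a toy Python sketch (plain functions standing in for accessors, no lens library):

```python
# A generic depth-first traversal, written once, parameterized by an accessor:
def dfs(node, get_children, visit):
    visit(node)
    for child in get_children(node):
        dfs(child, get_children, visit)

# Reused for a dict-based tree...
tree = {"name": "root", "kids": [
    {"name": "a", "kids": []},
    {"name": "b", "kids": [{"name": "c", "kids": []}]},
]}
names = []
dfs(tree, lambda n: n["kids"], lambda n: names.append(n["name"]))
print(names)  # ['root', 'a', 'b', 'c']

# ...and for nested tuples, just by swapping the accessors:
nested = (1, (2, (3,)), (4,))
leaves = []
dfs(nested,
    lambda n: n if isinstance(n, tuple) else (),
    lambda n: leaves.append(n) if isinstance(n, int) else None)
print(sum(leaves))  # 10
```

    The traversal is written once; only the accessors change per data structure.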

    • pasteldream 3 days ago

      > Your natural choice in the "OOP style" is to write a lot of boiler plate to point to each different field you want to get/set.

      Your natural alternative to lenses in imperative languages is usually to just store a reference or pointer to the part you want to modify. Like a lens, but in-place.

      • ForHackernews 2 days ago

        But then you're modifying the thing, not creating a new object with different content. It's different semantics.

        • pasteldream 2 days ago

          Yeah, but I’m saying that in 90% of the cases where a functional program would use lenses, the corresponding imperative program would just use references.

          • ssivark 2 days ago

            Sure, but can you make that imperative program (with pointers and all) as modular/composable? That's the whole point -- lenses are not an end unto themselves; only a tool in service of that goal.

            • pasteldream 2 days ago

              I think we’re talking past each other.

              Lenses serve many purposes. All I’m saying is that in practice, the most common role they fulfil is to act as a counterpart for mutable references in contexts where you want or need immutability.

              Can the use of lenses make a program more “composable”? Maybe, but if you have an example of a program taking advantage of that flexibility I’d like to see it.

              • ssivark 2 days ago

                Do check out the links in my original comment above, which explain that the whole motivation behind all this (of which lenses are just a small part) is modularity. Modularity and composability are two sides of the same coin---being able to construct a complex whole by combining simple parts---depending on whether you view it top-down or bottom-up.

                Suppose you refactor a field `T.e.a.d` to `T.e.b.d`, for whatever reasons. How many places in your codebase will you have to edit, to complete this change?

                Dot access exposes to the outside world the implementation details of where `d` lives, while lenses allow you to abstract that as yet another function application (of a "deep-getter" function) so your code becomes extremely modular and flexible. A good language implementation then hopefully allows you to use this abstraction/indirection without a significant performance penalty.
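                As a sketch of that abstraction (hypothetical Python, with get_d/set_d standing in for the "deep-getter"/"deep-setter" pair):

```python
# The path is named in exactly one place:
def get_d(t):
    return t["e"]["a"]["d"]

def set_d(t, value):
    e = t["e"]
    return {**t, "e": {**e, "a": {**e["a"], "d": value}}}

# Call sites never mention the path:
t = {"e": {"a": {"d": 1}}}
t2 = set_d(t, get_d(t) + 10)
print(get_d(t2))  # 11
# If `d` moves from T.e.a.d to T.e.b.d, only get_d/set_d change;
# every call site stays exactly as written.
```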

                • binary132 2 days ago

                  void set_d(T*);

                  • ssivark 2 days ago

                    Yup, that’s basically the idea behind lenses, once you add a few more ergonomic niceties.

                    The Haskell approach is to take any pattern, abstract it out into a library, and reuse it instead of ever having to implement that plumbing again, i.e. a very generic get/set_foo which could specialize to specific fields/structures. Following that, you could also write a lenses library in C++ if you don't want to redo this for every project.

                    The point is not that it can’t be done in non-functional languages, but that it’s an uncommon pattern AFAICT; the common approaches result in much less modular code.

    • patrick451 2 days ago

      > But field access does not compose easily if you have a nested hierarchy of objects. Your natural choice in the "OOP style" is to write a lot of boiler plate to point to each different field you want to get/set.

      This is a self-inflicted problem. Make data public and there is no boilerplate.

  • endgame 2 days ago

    Lenses also let you take interesting alternate perspectives on your data. You can have a lens that indexes into a bit of an integer, letting you get/set a boolean, for example.
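    For example, a sketch of such a bit lens in Python (illustrative, not any specific library):

```python
# A lens onto bit k of an integer: get reads a bool, set returns a new integer.
def bit_lens(k):
    get = lambda n: bool(n & (1 << k))
    set_ = lambda n, b: (n | (1 << k)) if b else (n & ~(1 << k))
    return get, set_

get2, set2 = bit_lens(2)
print(get2(0b101))         # True  (bit 2 of 5 is set)
print(set2(0b101, False))  # 1     (5 with bit 2 cleared)
print(set2(0b001, True))   # 5     (1 with bit 2 set)
```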

  • aap_ 3 days ago

    Immutability is a central concept in functional programming.

  • dullcrisp 3 days ago

    You can uhhh abstract over the property which seems cool if you’re into abstracting things but also probably shouldn’t be the thing you’re abstracting over in application code.

    Or on second look the sibling comment is probably right and it’s about immutability maybe.

  • o11c 3 days ago

    This is equivalent to that for people who are irrationally terrified of mutability, and are willing to abandon performance.

    • majoe 3 days ago

      Counterintuitively, Julia recommends the use of immutable data types for performance reasons, because immutability enables more flexible compiler optimisations.

      An immutable variable can be safely shared across functions or even threads without copying. It can be created on the stack, the heap, or in a register, whatever the compiler deems most efficient.

      In the case where you want to change a field of an immutable variable (the use case of lenses), immutable types may still be more efficient, because the variable was stack-allocated and copying it is cheap, or because the compiler can correctly infer that the original object is not in use anymore and thus reuse the data of the old variable for the new one.

      Coming from the C++ world, I think immutability by default is pretty neat, because it enables many of the optimisations you would get from C++'s move semantics (or Rust's borrow checker) without the hassle.

      • leiroigh 2 days ago

        There is nothing counter-intuitive or julia-specific about it:

        The fastest way is to have your data structure in a (virtual) register, and that works better with immutable structures (i.e. memory-to-SSA promotion has limitations). The second fastest way is to have your data structure allocated on the heap and mutate it. The slowest way is to have your data structure allocated on the heap, have it immutable, copy it all the time, and then let the old copies get garbage collected. That last, slowest way is exactly what many "functional" languages end up doing. (Exception: read-copy-update is often a very good strategy in multi-threading, and is relatively painless thanks to the GC.)

        The original post was about local variables -- and const declarations for local variables are mostly syntactic sugar, the compiler puts it into SSA form anyway (exception: const in C if you take the address of the variable and let that pointer escape).

        So this is mostly the same as in every language: You need to learn what patterns allow the current compiler version to put your stuff into registers, and then use these patterns. I.e. you need to read a lot of assembly / llvm-IR until you get a feeling for it, and refresh your feelings with every compiler update. Most intuitions are similar to Rust/clang C/C++ (it's llvm, duh!), so you should be right at home if you regularly read compiler output.

        Julia has excellent tooling to read the generated assembly/IR; much more convenient than java (bytecode is irrelevant, you need to read assembly or learn to read graal/C2 IR; and that is extremely inconvenient).

    • BoiledCabbage 3 days ago

      It's a similar idea to map() but for more complex objects than arrays. When people use "map" in Javascript (or most any other language that supports it) do they do so because "they are terrified of mutability, and are willing to abandon performance?"

      Your comment reads like the response of someone who is struggling to understand a concept.

      • o11c 3 days ago

        Only the get half is `map`-like. In combination it's more like a property descriptor, which is far easier to understand and much more efficient.

        And, if it wasn't obvious, it's only the `set` half where lenses suck for performance.

    • pasteldream 3 days ago

      Immutability gives you persistence, which can be practically useful. It’s not just fear.

      • leiroigh 2 days ago

        Yes. O(1) snapshots are awesome! Persistent datastructures are a monumental achievement.

        But that comes at a performance price, and in the end, you only really need persistent datastructures for niche applications.

        Good examples are: ZFS mostly solves write amplification on SSD (it almost never overwrites memory); and snapshots are a useful feature for the end user. (but mostly your datastructures live in SRAM/DRAM which permit fast overwriting, not flash -- so that's a niche application)

        Another good example is how julia uses a HAMT / persistent hash-map to implement scoped values. Scoped values are inheritable threadlocals (tasklocal; in julia parlance, virtual/green thread == task), and you need to take a snapshot on forking.

        Somebody please implement that for inheritable threadlocals in java! (such that you can pass an O(1) snapshot instead of copying the hashmap on thread creation)

        But that is also a niche application. It makes zero sense to use these awesome fancy persistent datastructures as default everywhere (looking at you, scala!).

verdverm 3 days ago

Was hoping this was data lenses, like cambria from ink&switch

https://www.inkandswitch.com/cambria/

Not sure how "A Lens allows to access or replace deeply nested parts of complicated objects." is any different from writing a function to do the same?

Julia curious, very little experience

  • laszlokorte 3 days ago

    Yes, lenses are pairs of functions that allow bidirectional data transformations. One function acts like a getter and one function acts like a setter. The signatures of the functions are designed to compose nicely. This makes it possible to compose complex transformations from a few simple building blocks.

    In the end it is really just function composition but in a very concise and powerful way.

    In your cambria example the lens is defined as YAML. So this YAML needs to be parsed and interpreted and then applied to the target data. The rules that are allowed to be used in the YAML format must be defined somewhere. With pure functional lenses, the same kind of transformation rules can be defined just by function composition of similar elemental rules that are themselves only pairs of functions.

    • verdverm 3 days ago

      To be clear, cambria is not mine

      > So this yaml needs to be parsed and interpreted and the applied to the target data. The rules that are allowed to be used in the yaml format must be defined somewhere.

      I wasn't trying to get into the specific technology. The Julia still needs to be parsed, and while YAML has them separate, CUE does not (which is where I write things like this and have building blocks for lenses [1], in the conceptual sense)

      In the conceptual sense, or at least an example of one, lenses are about moving data between versions of a schema. It sounds like what you are describing is capable of this as well? (likely among many other things both are capable of)

      [1] https://hofstadter.io/getting-started/data-layer/#checkpoint...

      • laszlokorte 3 days ago

        Yes, functional lenses are very good at transforming between schemas.

        You can think of it as a functional-programming-based embedded domain-specific language for transforming immutable data structures into each other. Sure, there are other ways to do it, but it's like the generalized map/filter/reduce class of functions vs. doing the same imperatively by hand or in other ways.

        • verdverm 3 days ago

          hmm, that makes it sound closer to CUE, where all values are immutable

          CUE is in the logical family with Prolog and is not Turing Complete

  • versteegen 3 days ago

    Lenses make it more convenient to use immutable structs, which Julia encourages (particularly as they unlock various optimisations).

  • antidamage 3 days ago

    It's about the annotation triggering a code pre-processor.

    For example in Lombok, the @Data annotation will create a getter and a setter for every private member, and @Getter and @Setter will do the individual methods respectively.

    Annotating a class will do every private member, or you can annotate a specific member.

    A lens is a shortcut to making a getter/setter for something several elements deep, where instead of calling:

    `parentObject.getChild().setChildAttribute()` you can call: `parentObject.setChildAttributeViaLens()`

    and not need to write multiple functions in both classes, or even use multiple annotations.

max_ 3 days ago

Guys, what's your opinion on Julia?

I am thinking of using it for data science work.

Any drawbacks or advantages I should know about?

  • tombert 3 days ago

    I don’t use it for much data science, and I only have used it for one project (to speed up my dad’s Octave code), but I actually really liked it. Generally speaking things were extremely fast, even without any specific optimization on my end. Like, without hyperbole, a direct port of my dad’s code was between 50-100x faster in most cases, and when I started optimizing it I ended up getting it going about 150x faster.

    There was a bit of weirdness with the type system with its dynamic dispatch making things slow, but specifying a type in function headers would resolve those issues.

    I also thought that the macro system was pretty nice; for the most part I found creating custom syntax and using their own helpers was pretty nice and easy to grok.

    Since I don’t do much data work I haven’t had much of an excuse to use it again, but since it does offer channels and queues for thread synchronization I might be able to find a project to use it for.

  • tomtom1337 3 days ago

    If you’re young, new to data science and hoping to get a job in it after some time, then I absolutely recommend learning Python instead of Julia.

    But if you are just interested in learning a new language and trying it in data science OR are not currently looking to enter the data science job market, then by all means: Julia is great and in many ways superior to Python for data science.

    It’s just that «everyone» is doing data science in Python, and if you’re new to DS, then you should also know Python (but by all means learn Julia too!).

  • Darmani 3 days ago

    I've worked in about 40 languages and have a Ph.D. in the subject. Every language has problems; some I like, some I'm not fond of.

    There is only one language that I have an active hatred for, and that is Julia.

    Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?

    In Julia, this is a hard problem, and you can wind up getting crashes deep in someone else's code.

    The reason is that this causes modules that don't import the new file to have different implementations of the same generic function in scope. Julia features the ability to run libraries on data types they were never designed for. But unlike civilized languages such as C++, this is done by randomly overriding a bunch of functions to do things they were not designed to do, and then hoping the library uses them in a way that produces the result you want. There is no way to guarantee this without reading the library in detail. Also no kind of semantic versioning that can tell you when the library has a breaking change or not, as almost any kind of change becomes a potentially-breaking change when you code like this.

    This is a problem unique to Julia.

    I brought up to the Julia creators that methods of the same interface should share common properties. This is a very basic principle of generic programming.

    One of them responded with personal insults.

    I'm not the only one with such experiences. Dan Luu wrote this piece 10 years ago, but the appendix shows the concerns have not been addressed: https://danluu.com/julialang/

    • JanisErdmanis 2 days ago

      Overriding internal functions is discouraged; hence, one often only needs to monitor the public API changes of packages, as in every other programming language. In my experience, package updates rarely broke stuff like that. Updates in Base sometimes can cause issues like that, but those are thoroughly tested on the most popular registered packages before a new Julia version is released.

      Interfaces could be good as intermediaries and it is always great to hear JuliaCon talks every year on the best ways to implement them.

      > Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?

      In my experience it's mostly trivial. I guess your pain points may have come from making each file a module, then adding methods for your own types in a different module; moving things around is then error-prone. The remedy here is sometimes to not make internal modules. However, the best solution is to write integration tests, which is good software development practice anyway.

    • oconnore 3 days ago

      I guess you had a bad experience, but this hasn’t been an issue for me using it for many years now.

  • dandanua 3 days ago

    As a language, Julia is strictly superior to Python in many ways (e.g., it's easy to write multithreaded code). Currently Python can still be preferable due to its huge ecosystem and super-optimised libraries for data science. But Julia is catching up fast.

  • pjmlp 3 days ago

    Drawback: it isn't Python in community scale; remember Python was born in 1996, so it has had enough time to grow.

    Advantages: it is yet another Lisp-like language in an Algol-like syntax, like Dylan and the Wolfram Language, and also another one with multi-method support à la Common Lisp, Dylan, Clojure, and whatever else implements a subset of CLOS.

    It was designed from the ground up to be compiled with a JIT, not as an afterthought.

    These are the kinds of places making use of it,

    https://juliahub.com/case-studies

  • wolvesechoes 3 days ago

    I would love to love it, but for now I hate to hate it.

    Dynamic yet performant language with LISPy features and focus on numerical applications? Count me in.

    But then I found out that the execution of some ideas is rather bad, and some ideas are great on paper but not in practice. For example, the debugging experience is a joke even compared to the SBCL debugger (and you of course need to download the Debugger.jl package, because who needs a good debugger in the base language implementation?). And multiple dispatch is a very powerful feature... I sometimes think it is too powerful.

    There is no proper IDE, and the VS Code extension was slow and unstable when I tried it (last time a few months ago).

    But my biggest gripe is with the people developing Julia. Throughout the years, every time people complained about something ("time to first plot", static compilation, etc.), the initial responses were always "you are holding it wrong", "Julia is not intended to be used this way", "just keep your REPL open", "just use this third-party package", only for them to try to address the problem a few releases later, sometimes in a suboptimal way. It is nice that in the end they try to deliver solutions, but it seems to me it always requires a constant push from the bottom.

    Moreover, I am quite allergic to marketing strategies and hype generation:

    Julia doesn't run like C when you write it like Python. It can be very fast, but that requires quite detailed tuning.

    You don't need to think about memory management, until you need to, because otherwise allocations kill your performance.

    You can omit types, until you can't.

    Those things are quite obvious, but then why produce so much hype and bullshit people with curated and carefully tuned microbenchmarks?

    It maybe solves the two-language problem, but in return it gives you the million-packages issue. You need a package to have a tolerable debugger (Debugger.jl), you need a package to have static and performant arrays (StaticArrays.jl), you need a package to have enums worth using, you need a package to hot-reload your code without restarting the REPL (Revise.jl), you need a package to compile your code to an executable (PackageCompiler.jl/StaticCompiler.jl; they started to address that in the last release), etc. And then you need to precompile them on your machine to get a reasonable startup time.

    TLDR: Julia is wasted potential.

    • hatmatrix 2 days ago

      As for the tooling, julia-snail on Emacs is supposed to be like SLIME for Lisp. But it sounds like that isn't your main gripe. Having to load so many packages is indeed a pain, but it does suggest the core language is rather minimal...

jhoechtl 3 days ago

Is Julia a general-purpose programming language? I mean, I did check the web site, which contains a "General Purpose" section, yet the articles seem to center around "scientific applications".

  • Mikhail_K 2 days ago

    In release 1.12 they finally implemented the ability to create compact executables, so I would say the answer to your question is "yes".

  • adgjlsfhk1 2 days ago

    It is a general-purpose language, but its happy place is math. Most languages (except Fortran, Matlab, and R) are very much oriented towards writing web servers, compilers, etc., so Julia gets lots of wins in science just by virtue of caring more about math.

    Julia is a completely reasonable general purpose language, but getting people to switch generally requires a ~10x better experience, and Julia can't deliver that for general purpose applications.

dopu 2 days ago

Is `set` basically syntactic sugar for deep-copying a struct, mutating the specified field, and then returning that deep copy? Seems like it could be quite slow.

  • pasteldream 2 days ago

    Yes. I’m not sure how slow it is in Julia, but pure functional languages do tend to generate more garbage for this reason. Hopefully the compiler can optimize it away in simple cases.

    Edit: It’s not deep-copying the whole struct, just the parts that need to point to something new. So if you update a.b.c, it will shallow-copy a and a.b, but nothing else.
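
    That shallow path copy can be sketched like this (in Python with frozen dataclasses; the types and names are illustrative, not Accessors.jl's implementation):

    ```python
    from dataclasses import dataclass, replace

    # Illustrative immutable types: updating a.b.c copies only the nodes
    # along the path; everything else is shared, not deep-copied.
    @dataclass(frozen=True)
    class C:
        value: int

    @dataclass(frozen=True)
    class B:
        c: C
        other: tuple

    @dataclass(frozen=True)
    class A:
        b: B

    a = A(B(C(1), other=(1, 2, 3)))

    # "set a.b.c.value to 42" as a path copy: shallow-copy a and a.b,
    # replace a.b.c, share everything else.
    a2 = replace(a, b=replace(a.b, c=replace(a.b.c, value=42)))

    assert a2.b.c.value == 42
    assert a.b.c.value == 1          # original untouched
    assert a2.b.other is a.b.other   # untouched parts are shared, not copied
    ```

    So the cost is proportional to the depth of the path, not the size of the whole structure.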

Archit3ch 2 days ago

Can this be used for read-copy-update (RCU)?

webdevver 2 days ago

oh... i thought this was going to be about simulating optical lenses and lens physics.

Quitschquat 3 days ago

Is this like setf in lisp?

  • eigenspace 2 days ago

    In the simplest case, yes, but in general, no: lenses are more general, since they are first-class values that can be passed around and composed.
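
    A minimal sketch of that difference (in Python; the `Lens`/`field_lens` names are made up for illustration, not any library's API). Unlike a setf place, which is syntax, a lens is a value:

    ```python
    from dataclasses import dataclass, replace

    # A minimal lens: a (get, set) pair, where set returns an updated copy.
    @dataclass(frozen=True)
    class Lens:
        get: callable
        set: callable

        def __matmul__(self, inner):  # compose: focus self, then inner
            return Lens(
                get=lambda s: inner.get(self.get(s)),
                set=lambda s, v: self.set(s, inner.set(self.get(s), v)),
            )

    def field_lens(name):
        """Lens focusing one field of a frozen dataclass."""
        return Lens(
            get=lambda s: getattr(s, name),
            set=lambda s, v: replace(s, **{name: v}),
        )

    @dataclass(frozen=True)
    class Inner:
        x: int

    @dataclass(frozen=True)
    class Outer:
        inner: Inner

    # Lenses compose into deeper lenses, and can be stored or passed around.
    lens = field_lens("inner") @ field_lens("x")
    o = Outer(Inner(1))
    o2 = lens.set(o, 5)
    assert lens.get(o2) == 5
    assert o.inner.x == 1  # original untouched
    ```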

waldrews 2 days ago

Now if only the Julia community didn't keep insisting on ligatures and impossible-to-look-up Unicode symbols - the hollow semicolon for compose right to left, seriously?