Feed Aggregator Page 661
Rendered on Thu, 15 Oct 2020 15:33:48 GMT
via Elm - Latest posts by @solificati on Thu, 15 Oct 2020 15:05:06 GMT
GADTs and polymorphic variants open up a lot of possibilities for us. The module system is also more powerful (really, modules are a different thing in OCaml, and the shared name is rather unfortunate).
We use GADTs to express flows as data structures and write interpreters for them.
The module system helps with parsers and with writing isomorphisms for schemas (functions that convert between complex data structures): we can extend more basic types with implementations, so bigger isos/parsers compose better.
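As a rough illustration of the flows-as-data style (a minimal OCaml sketch; the step names and interpreter are invented for this example, not taken from the poster's codebase), a GADT lets each constructor record the type its step produces, and a single interpreter walks the description:

(* Hypothetical flow DSL: each constructor carries its result type. *)
type _ flow =
  | Fetch : string -> string flow                (* fetch a document by id *)
  | Validate : string flow -> bool flow          (* check the fetched document *)
  | Both : 'a flow * 'b flow -> ('a * 'b) flow   (* run two flows and pair the results *)

(* One interpreter over the whole description. *)
let rec run : type a. a flow -> a = function
  | Fetch id -> "contents of " ^ id
  | Validate f -> String.length (run f) > 0
  | Both (a, b) -> (run a, run b)

let () = assert (run (Validate (Fetch "doc-1")))

Because each constructor's result type is tracked, run (Validate (Fetch "doc-1")) is statically known to be a bool.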
Polymorphic variants are more of a nice tool here and there: they change nothing in a big way, but they are quite useful as pseudo row types. This is more of a stylistic choice and, to be honest, we find Elm's extensible records a good fit for the same use cases.
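For a flavour of the pseudo-row-type point (again just an assumed toy example, not the poster's code): a function can accept any value drawn from a bounded set of variant tags without a nominal type being declared anywhere:

(* `describe` accepts at most the tags `Draft and `Published;
   callers need no shared type declaration. *)
let describe : [< `Draft | `Published ] -> string = function
  | `Draft -> "draft"
  | `Published -> "published"

let () = print_endline (describe `Draft)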
Also, please note that our recommendation takes multiple factors into consideration. We chose Reason not only because of the "more powerful" type system but also due to genType, interoperability, community, and tooling. It is simply the result of specific needs.
via Elm - Latest posts by @system system on Thu, 15 Oct 2020 14:05:15 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @rupert Rupert Smith on Thu, 15 Oct 2020 13:51:42 GMT
I get that PureScript is basically Haskell, so it has a more powerful type system than Elm. It's been a while since I used OCaml (20 years); what does Reason have, modelling-wise, over Elm? Is it the extensible union types? Or the module system, perhaps?
via Elm - Latest posts by @solificati on Thu, 15 Oct 2020 13:45:52 GMT
Our team's recommendation right now is Reason. We have also used PureScript and still allow it for new projects.
Our domain is mostly legal/compliance flows. Think legal entities and documents in multiple versions, with multiple versions of schemas, and our app transforms between them: one employee publishes a document that should be compliant with one regulation, then another changes it to make it compliant with another schema and marks it pending review in a different department …
via Elm - Latest posts by @rupert Rupert Smith on Thu, 15 Oct 2020 13:26:10 GMT
I am curious about this: if Elm did not fit your needs here, what did you use instead that did?
Also, what is the domain we are talking about here?
via Elm - Latest posts by @solificati on Thu, 15 Oct 2020 11:39:27 GMT
By "express" I meant "express in a specific style". Of course it's still a complete language, and I should not have used the word "cannot": you can, of course, but you need to sacrifice style.
But to be specific: I find Elm lacking when it comes to polymorphism. For example, we'd like to have structures with functions that operate on them, where the elements of those structures are polymorphic.
Typeclasses and/or modules also provide tools for late polymorphism, where you can define a type's behaviour later, as sketched below.
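To make "defining behaviour later" concrete, here is a hypothetical OCaml sketch (Reason shares OCaml's module system; none of these names come from the poster's code). The PRINTABLE behaviour for int is supplied in a separate module, after the fact:

module type PRINTABLE = sig
  type t
  val to_string : t -> string
end

(* Works over any module that implements PRINTABLE. *)
let print (type a) (module P : PRINTABLE with type t = a) (x : a) =
  print_endline (P.to_string x)

(* The behaviour for int is defined "later", independently of int itself. *)
module Int_printable = struct
  type t = int
  let to_string = string_of_int
end

let () = print (module Int_printable) 42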
We also model a lot of flows in our apps (think document flow in an organisation), and GADTs are very useful for this.
I didn't compare TS and Elm in this regard. We use TS for our UI, and if somebody still wants to work with JavaScript libs or idioms, TS is unmatched here. TS tries to be a superset of JavaScript and to obey its laws and best practices. Elm and TS are different tools with different goals.
via Elm - Latest posts by @grrinchas Dainius on Thu, 15 Oct 2020 11:31:28 GMT
Do you mean that the antipattern created had to do with many untyped functions inside let?
Untyped functions add some weight to the problem, but they are not the main issue.
The let clause allows an imperative way of structuring an application. In Java, say, you define variables and then use them; here we do something very similar. Although let clauses are expressions and you can define functions that accept arguments, most often you don't do it; instead, you use parameters that come from the upper-level function. Then you put the next function inside the let clause as well, because it depends on the value of the previous function, and so on. Leave out the function signatures, and your code will literally look like imperative statements.
These functions are very hard to test, reuse, refactor, read and maintain.
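For illustration only, a sketch in OCaml, whose let ... in chains read much like an Elm let block (all names here are invented): the first version strings bindings together like statements; the second splits the steps into small functions that can be tested and reused individually:

(* Hypothetical helpers, defined only so the sketch runs. *)
let validate order = order ^ " validated"
let add_tax order = order ^ " taxed"
let ship order = "shipped: " ^ order

(* The shape under discussion: each binding depends on the previous one,
   so the body reads like a run of imperative statements. *)
let process_order order =
  let validated = validate order in
  let taxed = add_tax validated in
  ship taxed

(* The same steps as a pipeline of small, individually testable functions. *)
let process_order' order = order |> validate |> add_tax |> ship

let () = print_endline (process_order "order-1")
let () = print_endline (process_order' "order-2")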
via Elm - Latest posts by @tgelu Gelu Timoficiuc on Thu, 15 Oct 2020 10:37:12 GMT
let is, of course, not imperative in Elm; it is an expression that evaluates to a value. Do you mean that the antipattern created had to do with many untyped functions inside let?
For example, the TEA architecture is simple, but when you have to make a component library using this architecture, they just can't figure out how to do it. I have seen so many weird patterns, which just make the codebase unmaintainable and a pain to work with.
I think this is one of the most underestimated challenges in Elm.
I have watched many talks and read about how one should change one's "component" mindset to something else, to collections of functions. And many times, with the help of good examples, I thought I had grasped it. But after years of Elm, I am still unsure how to make something of good quality that serves the purpose of a component library. Even if it isn't true, React still feels ideal for building a set of UI tools. With Elm maybe there is no single pattern, which is alright, but one has to deal with this when selling Elm to any team that is looking for "component" patterns.
I have found this very difficult to do.
via Elm - Latest posts by @grrinchas Dainius on Thu, 15 Oct 2020 09:29:15 GMT
This is a very interesting discussion.
I have used Elm at my previous company and now at my current one. When I joined, out of 8 devs I was the only one who was really into Elm. There were a couple of devs who didn't mind doing Elm, but there were some who would never do it. Over 4 years of Elm experience in two different companies, here are the main reasons why people would migrate away from Elm:
Most people can grasp functional programming concepts; what they struggle with is applying those concepts to the UI. For example, the TEA architecture is simple, but when you have to make a component library using this architecture, they just can't figure out how to do it. I have seen so many weird patterns, which just make the codebase unmaintainable and a pain to work with.
Elm allows using imperative programming concepts such as let expressions. Devs from an imperative programming background really love let clauses. This antipattern is so annoying: it allows for creating huge functions. I have seen functions thousands of lines long. Eventually, I ended up writing an elm-review rule disallowing let clauses altogether.
Not following good programming practices, like reusability, modularity, testing and so on. People are just happy that, after one day of struggling with the simple task of implementing a button, it actually works. They won't spend another 3 days abstracting that button, writing tests, maybe even putting it in a library, unless someone forces them. This makes the codebase even worse.
The more experienced a dev is, the more likely they are to reach for a known, easy, quick plug-and-play solution. Elm is the opposite from this perspective.
So, in conclusion: there are not many people who would try to learn Elm in the first place. Those who are willing either:
You can reduce migration away from Elm by writing tons of documentation, setting up the right tooling, using CI, and introducing Elm gradually to your peers. After a while, they will start to see how easily you are managing thousands of lines of front-end code. Then they start to trust Elm as a good solution and won't abandon it after you leave the company.
via Planet Lisp on Wed, 14 Oct 2020 20:52:55 GMT
This is a library by @Shinmera to find out which fonts are known to the OS and where their files are located.
Here is how you can list all "Arial" fonts and find where the "bold" version is located:
POFTHEDAY> (org.shirakumo.font-discovery:list-fonts :family "Arial")
(#<ORG.SHIRAKUMO.FONT-DISCOVERY:FONT "Arial" ROMAN REGULAR NORMAL>
#<ORG.SHIRAKUMO.FONT-DISCOVERY:FONT "Arial" ITALIC REGULAR NORMAL>
#<ORG.SHIRAKUMO.FONT-DISCOVERY:FONT "Arial" ROMAN BOLD NORMAL>
#<ORG.SHIRAKUMO.FONT-DISCOVERY:FONT "Arial" ITALIC BOLD NORMAL>)
POFTHEDAY> (third *)
#<ORG.SHIRAKUMO.FONT-DISCOVERY:FONT "Arial" ROMAN BOLD NORMAL>
POFTHEDAY> (org.shirakumo.font-discovery:file *)
#P"/System/Library/Fonts/Supplemental/Arial Bold.ttf"
It is also possible to find a single font filtering it by family, slant and other parameters:
POFTHEDAY> (org.shirakumo.font-discovery:find-font :family "PragmataPro")
#<ORG.SHIRAKUMO.FONT-DISCOVERY:FONT "PragmataPro" ROMAN REGULAR NORMAL>
POFTHEDAY> (describe *)
#<ORG.SHIRAKUMO.FONT-DISCOVERY:FONT "PragmataPro" ROMAN REGULAR NORMAL>
[standard-object]
Slots with :INSTANCE allocation:
FILE = #P"/Users/art/Library/Fonts/PragmataProR_0828.ttf"
FAMILY = "PragmataPro"
SLANT = :ROMAN
WEIGHT = :REGULAR
SPACING = NIL
STRETCH = :NORMAL
However, I found this library is still unstable on OSX and sometimes crashes somewhere in the CFFI code. @Shinmera has fixed some of these errors, but others remain uncaught.
Read the full documentation on it here:
via Elm - Latest posts by @Peter peter renshaw on Thu, 15 Oct 2020 00:55:16 GMT
Management. I spied an interesting Twitter clip of Steve Jobs talking about how companies reward Sales/Marketing at the expense of Product people, until the company is guided by people who have no idea what makes a successful product, let alone how to build one.
This "Institutional Product Rot" leads to poor decisions. A summary is on Twitter [0], but I also found the original full-length interview, "Steve Jobs Lost 1995 Interview" [1].
[0] "Jobs explaining Management/Sales & Marketing who don't understand how to build Products" https://twitter.com/mikeabbink/status/1315876303455870978
[1] "Steve Jobs Lost 1995 Interview" https://www.youtube.com/watch?v=thxg1-iLRT8
via Elm - Latest posts by @system system on Wed, 14 Oct 2020 17:56:41 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @system system on Wed, 14 Oct 2020 14:16:22 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @csuvikv Viktor Csuvik on Wed, 14 Oct 2020 13:15:46 GMT
I see the following output after building my Elm project:
Some new packages are needed. Here is the upgrade plan.
Install:
Bogdanp/elm-combine 3.1.1
NoRedInk/elm-decode-pipeline 3.0.1
Skinney/murmur3 2.0.6
coreytrampe/elm-vendor 2.0.3
elm-community/lazy-list 1.0.0
elm-community/random-extra 2.0.0
elm-community/shrink 2.0.0
elm-community/undo-redo 2.0.0
elm-lang/core 5.1.1
elm-lang/dom 1.1.1
elm-lang/html 2.0.0
elm-lang/http 1.0.0
elm-lang/keyboard 1.0.1
elm-lang/lazy 2.0.0
elm-lang/mouse 1.0.1
elm-lang/navigation 2.1.0
elm-lang/virtual-dom 2.0.4
evancz/elm-markdown 3.0.2
evancz/url-parser 2.0.1
ir4y/elm-dnd 2.0.0
krisajenkins/remotedata 4.5.0
rtfeldman/elm-css 14.0.0
rtfeldman/elm-css-util 1.0.2
rtfeldman/hex 1.0.0
thebritican/elm-autocomplete 4.0.3
Do you approve of this plan? [Y/n] y
Starting downloads...
● coreytrampe/elm-vendor 2.0.3
● NoRedInk/elm-decode-pipeline 3.0.1
✗ Skinney/murmur3 2.0.6
● Bogdanp/elm-combine 3.1.1
● elm-lang/html 2.0.0
● elm-lang/lazy 2.0.0
● elm-community/lazy-list 1.0.0
● elm-community/undo-redo 2.0.0
● elm-lang/virtual-dom 2.0.4
● rtfeldman/elm-css-util 1.0.2
● ir4y/elm-dnd 2.0.0
● elm-community/random-extra 2.0.0
● elm-lang/http 1.0.0
● evancz/elm-markdown 3.0.2
● elm-lang/mouse 1.0.1
● elm-lang/core 5.1.1
● elm-community/shrink 2.0.0
● krisajenkins/remotedata 4.5.0
● rtfeldman/hex 1.0.0
● elm-lang/dom 1.1.1
● evancz/url-parser 2.0.1
● elm-lang/keyboard 1.0.1
● elm-lang/navigation 2.1.0
● thebritican/elm-autocomplete 4.0.3
● rtfeldman/elm-css 14.0.0
Error: The following HTTP request failed.
<https://github.com/Skinney/murmur3/zipball/2.0.6/>
404: Not Found
I saw this post: "Skinney/murmur3 not downloading", but there the issue is solved only for Elm 0.19.0. Any ideas how I can make it work for 0.18.0?
via Lisp, the Universe and Everything by Vsevolod Dyomkin on Wed, 14 Oct 2020 09:02:00 GMT
RDF* is a new group of standards that aims to bridge the gap between RDF and property graphs. However, it has taken an "easy" route that made it ambiguous and backward incompatible. An alternative approach that doesn't suffer from the mentioned problems would be to introduce the notion of triple labels instead of using the embedded triples syntax.
Our CTO used to say that standards should be written solely by those who are implementing them. And although this statement may be a little too extreme in some cases, it's a good rule of thumb. The main reason is not that it will make the standards simple to implement; nor do I want to argue that allowing a simple implementation is the main requirement for a standard. What's more important is that the implementors have combined exposure to the whole variety of potential use cases, both from user feedback and from their own experience of consuming their own dogfood. Besides, it doesn't hurt that when something is simple to implement, it's also simple to understand, reason about, and use.
Obviously, giving all power to the implementers might lead to abuse of that power, but this is a second-order problem, and there are known ways to mitigate it, primarily by assembling representatives of several implementations in a committee. This approach is often frowned upon by hotheads due to alleged bureaucracy and the need for compromise, yet it leads to much more thought-out and lasting standards. A good example is the Common Lisp standard.
But I digress; let's talk about RDF*. This is the major candidate to solve the problem of RDF triple reification, which is crucial for basic compatibility between RDF and property graph representations. In short, RDF defines the simplest elegant abstraction for representing any kind of data: a triple that comprises a subject, a predicate, and an object. Triples are used to represent facts. Besides, there's a fourth component to a triple, called a graph. So, in fact, the historic name "triple", in the realm of RDF triple-stores, currently stands for a structure of 4 elements. The graph may be used to group triples. And despite the beauty of the simple and elegant concept of a triple in theory, having this fourth component is essential for any serious data modeling.
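For instance (an illustrative NQuads line; the graph IRI is made up for this example), the fourth element names the graph a triple belongs to:
<https://foo.com/I> <https://foo.com/have> <https://foo.com/dream> <https://foo.com/speeches> .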
Now we know how to represent facts and arbitrarily group them together. So the usual next-level question arises: how do we represent facts about facts? This leads to the problem of reification. Let's consider a simple example:
:I :have :dream .
This is a simple triple represented in the popular Turtle format. As RDF deals with resources, it assumes that there's some default prefix defined elsewhere (let's say it's https://foo.com/). The same triple may be represented in the basic NTriples format like this:
<https://foo.com/I> <https://foo.com/have> <https://foo.com/dream> .
What if we want to express the facts that it is a quote by Martin Luther King and that it was uttered in 1963? There are at least 3 ways to approach it:
_:fact1 rdf:subject :I ;
rdf:predicate :have ;
rdf:object :dream ;
meta:author wiki:Martin_Luther_King_Jr. ;
meta:date "1963" .
This works but, as I said, is conceptually wrong. It's RDF's Java-style "cancer of the semicolon". It leads to storage waste and poor performance.
:I :have#1 :dream .
:have#1 meta:author wiki:Martin_Luther_King_Jr. ;
meta:date "1963" .
This is complete nonsense, as it makes SPARQL queries unreasonably complex unless you implement, in the query engine, special syntax that ignores the #1 suffix.
:t1 { :I :have :dream . }
:t1 meta:author wiki:Martin_Luther_King_Jr. ;
meta:date "1963" .
Here, we use another (there are many more of them :) ) RDF format, TriG, which is an extension of Turtle for representing graphs. :t1 is a unique graph that is associated with our triple, and it is also used as the subject resource for metadata triples. This approach also has minor drawbacks, the most important of which is that grouping triples needs more overhead. We'll have to add an additional triple if we'd like to express that :t1 belongs to a graph :g1:
:t1 meta:graph :g1 .
On the flip side, that opens the possibility of putting the triple into more than a single graph. In other words, grouping may now be expressed as yet another property of the triple, which, in fact, it is.
Finally, RDF*'s answer is the embedded triples syntax:
<< :I :have :dream >> meta:author wiki:Martin_Luther_King_Jr. ;
meta:date "1963" .
Besides, you can also embed a triple in the object position:
wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> .
And do nesting:
<< wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> >> meta:date "1963" .
Neat, at first glance... Yet, there are many pitfalls of this seemingly simple approach.
The first obvious limitation of this approach is that this syntax is not able to unambiguously express all the possible cases. What if we want to say something like this:
<< << :I :have :dream >> meta:author wiki:Martin_Luther_King_Jr. ;
meta:date "1963" >>
meta:timestamp "2020-10-13T01:02:03" .
Such syntax is not specified in the RFC and it's unclear if it is allowed (it seems like it shouldn't be), although this is perfectly legit:
<< << :I :have :dream >> meta:author wiki:Martin_Luther_King_Jr. >>
meta:timestamp "2020-10-13T01:02:03" .
What about this:
wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> .
wiki:John_Doe meta:quote << :I :have :dream >> .
Do these statements refer to the same :I :have :dream . triple, or to two different ones? RDF* seems to assume (although the authors don't say so anywhere explicitly) that each subject-predicate-object combination is a unique triple, i.e. there can be no duplicates. But RDF doesn't mandate this, so some triple stores support duplicate triples. In that case, there is no way in Turtle* to express that multiple triples reference the same embedded triple in object position.
Moreover, there's a discussion in the RDF* workgroup whether the embedded triples are, actually, asserted or not. I.e. in the following example:
wiki:Martin_Luther_King_Jr. meta:quote << :I :have :dream >> .
should the triple :I :have :dream be treated as a top-level triple or differently? Should a SPARQL query like SELECT ?obj { :I :have ?obj } return :dream, or nothing, so that only SELECT ?obj { ?s ?p << :I :have ?obj >> } would be an acceptable way of accessing the embedded triple? We're now questioning the most basic principles of RDF…
And I haven't even started talking about graphs (for there's no TriG* yet). With graphs, there're more unspecified corner cases. For instance, the principal question is: can an embedded triple have a different graph than the enclosing property triple. It seems like a desirable property, moreover, it will be hard to prevent the appearance of such situations from directly manipulating the triple store (and not by reading serialized TriG* statements).
This is, actually, the major problem with Turtle*: it gives the impression of existing in a vacuum. To see it in context, we have to understand that the core of RDF comprises a group of connected standards: NTriples/NQuads, Turtle/TriG, and SPARQL. Turtle is a successor to NTriples that makes it more human-friendly, but all of them build on the same syntax, and this syntax is also used by SPARQL. Yet there's no NTriples*, and it's unclear whether it can even exist. GraphDB implements a hack by embedding the triple (or rather its hash, but that doesn't matter much) in a resource (like <urn:abcdefgh>), but, first of all, that's ugly, and, secondly, it also assumes no duplicates. Yet NTriples is the basic data interchange format for RDF, and forsaking it is a huge mistake. There's also no TriG* yet, as I mentioned, which is another sign that RDF* is mostly a theoretical exercise. TriG* could be defined as an extension of TriG with Turtle* syntax, but I have already briefly mentioned the issue it would face.
To sum up, the main deficiencies of Turtle* originate, in my opinion, from the desire to provide the most obvious UI while paying no attention to any other considerations.
What's the alternative? Well, Turtle* will probably end up being implemented in some way or another by all the triple-store vendors. However, I expect the implementations to be quite incompatible due to the high level of underspecification in the RFC.
Yet, you don't have to wait for Turtle* as graph-based reification is already available and quite usable.
Also, if we still had the choice to define an extension to RDF with the same purpose as RDF*, I'd take another quite obvious route. It may be less sexy, but it is at least as simple to understand and much more consistent, both within itself and with the other RDF standards. Moreover, a similar approach is already part of RDF: blank nodes.
Blank nodes are resources that are used just as ids:
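For example (an illustrative snippet in the same Turtle notation; the names are made up):
:I :know _:someone .
_:someone :name "John" .
Here _:someone connects the two statements without ever being given a globally meaningful IRI.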
We could just as well use a blank node instead of https://foo.com/t1 as our graph label resource:
_:b1 { :I :have :dream . }
_:b1 meta:author wiki:Martin_Luther_King_Jr. ;
meta:date "1963" .
The underscore syntax denotes blank nodes, so _:b1 creates a graph node that is used to connect other nodes together while we don't care about its representation at all.
Similarly to the blank node syntax, we could introduce a triple label syntax:
^:t1 :I :have :dream .
This statement means that our triple has the label t1. Now we can attach metadata to that label, in exactly the same manner as with graph-based reification (*:t1 is a "dereference" of the triple label):
*:t1 meta:author wiki:Martin_Luther_King_Jr. ;
meta:date "1963" .
This would map directly to an implementation able to unambiguously link the triple to its properties. It would also enable this:
wiki:Martin_Luther_King_Jr. meta:quote *:t1 .
wiki:John_Doe meta:quote *:t1 .
And defining NTriples*/NQuads* becomes possible as well. This is an NQuads statement labelled t1:
^:t1 <https://foo.com/I> <https://foo.com/have> <https://foo.com/dream> <https://foo.com/g1> .
Alas, this simple and potent approach was overlooked for RDF*, so now we have to deal with a mess that is both hard to implement and likely to lead to more fragmentation.
via Elm - Latest posts by @tgelu Gelu Timoficiuc on Wed, 14 Oct 2020 08:10:31 GMT
On the other hand, Elm's type system is limited, and we can't express our domain through it. Our apps deal with complex information, and we'd like to have more tools at our disposal than records and tagged sums. We'd like to enforce laws and relations between data, even if we have to pay with a steeper learning curve.
I am asking because I have always, without exception, found it much simpler and more practical to model complex domains in Elm than in TS. So much so that I often start modelling something in Elm just to get clarity on how to do it properly in TS, and this often requires a bunch of helper types/libraries to do right.
To tie it back to the thread's topic: if the project I work on ever moves away from Elm, I cannot see data modelling being the reason. I am curious to learn how that happened to you.
via Elm - Latest posts by @system system on Wed, 14 Oct 2020 02:23:12 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @Sebastian Sebastian on Tue, 13 Oct 2020 23:50:03 GMT
Using elm-test src/ worked; it found all the tests. It was a bit slower: 2.5 secs without the glob vs 1.5 secs with it.
via Elm - Latest posts by @system system on Tue, 13 Oct 2020 21:04:44 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @ursi Mason Mackaman on Tue, 13 Oct 2020 18:50:34 GMT
I believe it most likely does. Cmd.none == Cmd.none does work, but trying to compare commands in general will result in an error.