Feed Aggregator Page 650
Rendered on Fri, 25 Sep 2020 09:03:22 GMT
via Elm - Latest posts by @kraklin Tomáš Látal on Fri, 25 Sep 2020 08:18:23 GMT
Ok, so the plugin has been out in the wild for almost 10 days, and so far no major problems have arisen, which means that I can proceed with the final touches and release it officially soon. Thanks for all your help so far.
If you have any feedback on the plugin beta version (feature requests, things to do better, something that was not clear to you), please write to me with:
Thanks again to all testers
via Elm - Latest posts by @jxxcarlson James Carlson on Fri, 25 Sep 2020 04:45:10 GMT
@rupert, I just cloned your parser-recoverable repo and tried the Array example. Beautiful! I am going to study your code and see if I can use it to improve the MiniLaTeX parser. It should give better results than what I have now (chunking as above + error messages using Parser.Advanced). I should be able to introduce parser-recoverable gradually, so as not to have to rewrite the whole parser before seeing benefits from it.
via Elm - Latest posts by @jxxcarlson James Carlson on Fri, 25 Sep 2020 04:33:19 GMT
Very glad to have found this discussion. Super interesting and helpful.
I use a very simple chunking approach, “logical paragraphs”, in MiniLaTeX, so that the parser can keep going and give the user as much rendered output as possible.
A logical paragraph is either an ordinary paragraph (blank lines above and below) or an outer \begin ... \end
block in LaTeX. This approach confines parse failures to logical paragraphs, which helps a lot, but is not sufficiently fine-grained. With one exception (below) this means that in most cases at most one paragraph is not properly rendered.
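The blank-line part of this chunking can be sketched in a few lines. This is a minimal, hypothetical helper (not MiniLaTeX's actual code), and it deliberately ignores the \begin ... \end case, which needs real parsing to match nesting:

```elm
module Chunk exposing (logicalParagraphs)

-- Sketch: split source into "logical paragraphs" on blank
-- lines, so a parse failure is confined to one chunk.
logicalParagraphs : String -> List String
logicalParagraphs source =
    source
        |> String.split "\n\n"
        |> List.map String.trim
        |> List.filter (not << String.isEmpty)
```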
The biggest problem with the logical paragraph approach as it stands is that if the user types \begin{whatever}
, the text will generate an unpleasant error until the matching \end{whatever}
is typed: all the text after the \begin{whatever}
is seriously messed up/missing in the rendered output. I had implemented a solution like the one @klazuka suggested above … have the editing system supply the \end{whatever}
. However, I had to put that aside for the time being because of the jumping-cursor problem with text edit fields. Bummer! As soon as I get elm-editor into good enough shape to integrate with the app, it will be possible to use this fix.
I hope understanding the above discussion will help me do more fine-grained recovery.
via Elm - Latest posts by @meowgorithm Christian Rocha on Thu, 24 Sep 2020 19:21:36 GMT
Exactly what I was looking for — thank you!
via Elm - Latest posts by @lydell Simon Lydell on Thu, 24 Sep 2020 19:20:19 GMT
If your type only has one variant* you can do it like this:
type A = A Int Int

getFirst : A -> Int
getFirst (A x _) =
    x
* More precisely, if your pattern is exhaustive.
via Elm - Latest posts by @meowgorithm Christian Rocha on Thu, 24 Sep 2020 18:51:43 GMT
I’m wondering if there’s a way to get values out of a custom type, in an example similar to the following, without using a case switch. Wondering if I’m missing something obvious.
type A = A Int Int

getFirst : A -> Int
getFirst a =
    case a of
        A x _ ->
            x
I know that the above would be better modeled as a tuple, and a record would do as well, but this is just a contrived example to help illustrate my question.
via Elm - Latest posts by @evancz Evan on Thu, 24 Sep 2020 18:40:31 GMT
I typically look at this benchmark to get a feeling for how different rendering systems compare. Try going to the interactive results and putting in:
The aggregated results I am seeing are as follows:
| vanillajs | elm | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.00 | 1.25 | 1.28 | 2.09 | 2.18 |
It looks like Elm is a bit faster than Svelte at performing operations.
| vanillajs | elm | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.01 | 1.19 | 1.00 | 1.65 | 2.18 |
The start-up time is a bit slower than Svelte's, but it looks like that is mostly to do with code size. So I would not expect these numbers to look as favorable for Svelte in a project with a normal number of dependencies.
It’s good that a rendering library is small, but that gets washed out if your other dependencies end up being big. So rather than thinking of Elm vs Svelte, I think Elm vs JS is the more sensible comparison.
It is pretty easy to cut out tons of functions from dependencies in Elm, while it is generally not practical to get close to that with JS modules. I talk a bit more about why the language is important for this comparison in this post.
So I personally think choosing a virtual DOM implementation based on size alone only makes sense when comparing JS projects to JS projects. Maybe it’s possible to make the Elm implementation even smaller, but if the goal is to reduce code size in practice, I think focusing on code generation more generally would probably be more rewarding.
| vanillajs | elm | svelte | react | react-redux |
|-----------|------|--------|-------|-------------|
| 1.00 | 1.55 | 1.33 | 2.11 | 2.69 |
Elm also appears to allocate a bit more than Svelte. Perhaps that could be trimmed down.
One idea is to detect static Html msg values and move them out of functions to the top level. That would mean they are allocated just once, whereas they may otherwise be allocated many times as different view functions are called. (The virtual DOM implementation detects when nodes are equal by reference, so this would skip diffing as well.)
The trade off with that idea is that (1) you do more work when the program starts and (2) the memory sits around for the whole duration of the program. These factors could be an issue in large enough programs, so it’d definitely take some special care to make sure this isn’t negative overall. (E.g. should there be a cache containing N kb of the most commonly used static nodes? Does that add too much overhead to be worth it? How do you set N? Etc.)
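As a rough sketch of the idea (hypothetical code, not something the compiler does today):

```elm
module Static exposing (banner, view)

import Html exposing (Html)


-- Hypothetical: hoisted to the top level, this node is
-- allocated once, and every call to `view` returns the same
-- reference, so the renderer could skip diffing it.
banner : Html msg
banner =
    Html.div [] [ Html.text "Welcome" ]


view : model -> Html msg
view _ =
    Html.div [] [ banner ]
```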
My sense is that projects significantly slower than Elm and Svelte are working fine for a lot of people. Even if we doubled the current performance somehow, I do not know if that is such a big deal to most people right now.
But yeah, if someone thought it would be a big deal, I would look into moving static Html msg values out of functions. That could definitely get the allocation numbers down, and maybe improve perf a bit as well. The hard part is finding a design that has predictable performance, without filling up the heap too much for people with very large programs. Maybe the naive design of just making them all top-level is fine! Someone would have to do a proof of concept to start collecting data on that!
via Elm - Latest posts by @Naserdin Naserdin on Thu, 24 Sep 2020 14:32:04 GMT
Since Elm is compiled to JS code, is it possible to follow Svelte’s approach and compile directly to raw DOM manipulation, with no vdom?
via Erlang.org News RSS by on Wed, 23 Sep 2020 00:00:00 GMT
OTP 23.1 is the first maintenance patch release for OTP 23, with mostly bug fixes as well as a few improvements.
A directory traversal vulnerability in the httpd module (inets application) was introduced in OTP 22.3.1 and corrected in OTP 22.3.4.6. It was also introduced in OTP 23.0 and corrected in OTP 23.1. The vulnerability is registered as CVE-2020-25623.
The vulnerability is only exposed if the HTTP server (httpd) in the inets application is used. The vulnerability makes it possible to read arbitrary files which the Erlang system has read access to, for example with a specially prepared HTTP request.
Adjust /bin/sh to /system/bin/sh in scripts when installing on Android.
Changes in build system to make it build for macOS 11.0 with Apple Silicon. Also corrected execution of match specs to work on Apple Silicon.
http://erlang.org/download/OTP-23.1.README
Pre-built versions for Windows can be fetched here:
http://erlang.org/download/otp_win32_23.1.exe
http://erlang.org/download/otp_win64_23.1.exe
Online documentation can be browsed here:
http://erlang.org/documentation/doc-11.1/doc
The Erlang/OTP source can also be found at GitHub on the official Erlang repository,
https://github.com/erlang/otp
via Elm - Latest posts by @system system on Thu, 24 Sep 2020 10:42:57 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @ymtszw Yu Matsuzawa on Thu, 24 Sep 2020 03:18:50 GMT
In our team we do not insert all the vendor prefixes automatically, the way we would with autoprefixer (rather, we cannot). It is indeed a shortcoming of the current style-generation procedure of elm-css (= the styled-components-ish way). BUT, it definitely provides us a convenient and safe way of managing CSS in Elm, so we decided to live with that.
CSS (in many cases) shows a “you can do one thing in several different ways” aspect, so what we currently do is:
- Use the property function, like property "-webkit-***", only where we need it.
- Wrap such declarations in helpers like Css.Extra.someFeature. We have batch to put multiple CSS directives in at once, so it can be utilized here to employ prefixed directives with a single function call.
Empirically, this approach is working well, since opportunities where we absolutely need the latest, cutting-edge CSS feature(s) are relatively slim, so blanket auto-prefixing is not that much of a must-have in reality.
Of course, YMMV, and your needs may vary from ours. Also, this approach definitely requires case-by-case investigation of newer feature usages, so it takes extra time and effort.
via Planet Lisp by on Wed, 23 Sep 2020 18:15:19 GMT
Today I found that the :read-timeout option of Dexador does not work as expected, and remembered this small but useful library. It provides a single macro which executes code and limits its execution to a given number of seconds.
For illustration, I'll use https://httpbin.org. This is a service which helps you to test HTTP libraries. If you haven't heard of it, I recommend taking a look.
Let's retrieve a URL which responds in 10 seconds. Even with the :read-timeout option, Dexador waits 10 seconds:
POFTHEDAY> (time
(nth-value 1
(dex:get "https://httpbin.org/delay/10"
:read-timeout 2)))
Evaluation took:
10.692 seconds of real time
200
If the site is not responding, a request may hang and block your application. Here is where trivial-timeout comes to the rescue!
POFTHEDAY> (trivial-timeout:with-timeout (2)
(time
(nth-value 1
(dex:get "https://httpbin.org/delay/10"))))
Evaluation took:
2.003 seconds of real time
before it was aborted by a non-local transfer of control.
; Debugger entered on #<COM.METABANG.TRIVIAL-TIMEOUT:TIMEOUT-ERROR {10055B5373}>
Internally, this library generates implementation-specific code to interrupt the code execution. Here is how our example looks for SBCL:
(let ((seconds 2))
  (flet ((doit ()
           (progn
             (time (nth-value 1
                     (dexador:get "https://httpbin.org/delay/10"))))))
    (cond
      (seconds
       (handler-case
           (sb-ext:with-timeout seconds
             (doit))
         (sb-ext:timeout (com.metabang.trivial-timeout::c)
           (declare (ignore com.metabang.trivial-timeout::c))
           (error 'com.metabang.trivial-timeout:timeout-error))))
      (t (doit)))))
And this is the same code, expanded on ClozureCL:
(let ((seconds 2))
(flet ((doit nil
(progn (time (nth-value 1
(dexador:get "https://httpbin.org/delay/10"))))))
(cond (seconds
(let* ((semaphore (ccl:make-semaphore))
(result)
(process
(ccl:process-run-function
"Timed Process process"
(lambda nil
(setf result
(multiple-value-list (doit)))
(ccl:signal-semaphore semaphore)))))
(cond ((ccl:timed-wait-on-semaphore
semaphore
seconds)
(values-list result))
(t
(ccl:process-kill process)
(error 'com.metabang.trivial-timeout:timeout-error)))))
(t (doit)))))
I don't know whether running the code in a separate thread like this can have side effects. At least, the library's README says that it might be dangerous :)))
via Elm - Latest posts by @rtfeldman Richard Feldman on Wed, 23 Sep 2020 17:32:13 GMT
don’t quite understand how something like webpack+postcss could add prefixes to dynamically generated css
At the time, elm-css had a CLI which generated static stylesheet files. A few years ago we changed it to the current runtime generation design, and that’s what it’s been ever since!
via Elm - Latest posts by @hexedhash Giorgio on Wed, 23 Sep 2020 15:51:38 GMT
Hello!
I’m curious to learn how folks who use elm-css are adding vendor prefixes to their CSS.
I saw this [seemingly outdated] github thread but don’t quite understand how something like webpack+postcss could add prefixes to dynamically generated css.
via Elm - Latest posts by @rupert Rupert Smith on Wed, 23 Sep 2020 12:02:03 GMT
Anyway, I think I have now got this working how I want it to. I have played around with various ideas, and the main insight I have had is that the recovery tactic should not be associated with tokens. The aim is not to error-correct individual tokens, but simply to get the parser back to a place where it can continue, whilst taking note of the problem that did occur.
Originally, I had the idea of passing down the error handler on each Parser
building block, in a similar way to how inContext
works in Parser.Advanced
. So I had:
type Parser context problem value
= Parser
(RecoveryTactic problem
->
{ pa : PA.Parser context problem (Outcome context problem value)
, onError : RecoveryTactic problem
}
)
The problem with this is that the recovery tactic would often be used in the wrong situation. Take, say, parsing a list of integers from input like:
[1sfd, 2, 3, 4]
That would fail on parsing 1sfd as an Int, but if recovering by skipping ahead to , and then re-trying the int parser, that is also going to fail, because there is whitespace after the comma, not an int. The recovery tactic needs to be put across a larger piece of the parser, which will be re-tried in its entirety:
(PR.succeed identity
|> PR.ignore PR.spaces
|> PR.keep (PR.int ExpectingInt InvalidNumber)
|> PR.ignore PR.spaces
|> PR.ignore (PR.symbol "," ExpectingComma)
)
|> PR.forwardThenRetry [ ',' ] ExpectingComma Recovered
So I was able to get rid of the complicated context passing error handling mechanism, and just have the Parser
type like this:
type alias Parser context problem value =
PA.Parser context problem (Outcome context problem value)
The recovery tactic described in various papers is to first back up to a known start symbol, then scan ahead to a sentinel symbol, and try to continue after that. This is implemented as:
https://github.com/the-sett/parser-recoverable/blob/master/src/Parser/Recoverable.elm#L608
And can be summarised by this pseudo-code:
forwardThenRetry parser =
loop, starting with empty warning list []
oneOf [ try backtrackable parser
|> ensure any warnings from previous loop iterations are kept
, scan ahead for a sentinel token
|> if no characters consumed then
fail
else
try again
|> adding a warning about what was skipped over
]
Some tidying up and documentation and I will put it out as a new package.
via Elm - Latest posts by @rupert Rupert Smith on Wed, 23 Sep 2020 11:39:45 GMT
I found some time at the weekend to look through these. The parsing papers are not really much use when it comes to Elm, as they are all about LR parsers - tree sitter is presumably LR parser based. The Elm parser is a recursive descent parser and so is for LL grammars.
This is why it has backtrackable
for situations where you need to parse ahead, realize you are on the wrong branch and then back-up and try a different one. For example in C, you might have a function signature or implementation:
// Signature
void f(int, const char*);
// Implementation
void f(int, const char*)
{
}
You don’t know which you are getting until you hit the ; or the {. With an LL parser you either re-design the grammar to be more LL-friendly (that is, make the void f(int, const char*) bit a production of the grammar that forms part of a signature or implementation), or look ahead.
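The look-ahead option can be sketched with elm/parser's backtracking. This assumes hypothetical signature and implementation parsers and a Decl type; the point is only the oneOf/backtrackable shape:

```elm
-- If the signature branch fails after consuming input,
-- backtrackable lets oneOf back up and try the
-- implementation branch from the same position instead.
declaration : Parser Decl
declaration =
    Parser.oneOf
        [ Parser.backtrackable signature
        , implementation
        ]
```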
Googling for “error recovery in LL parser” or “error recovery in recursive descent parser” does turn up some relevant stuff.
From the tree sitter stuff though, this paper is quite fascinating on the topic of incremental tooling: https://www2.eecs.berkeley.edu/Pubs/TechRpts/1997/CSD-97-946.pdf. One thing to note is that it was written over 20 years ago, when computers were a lot slower; I think a simple chunking strategy might be good enough to keep the problem small with the amount of grunt we have at our disposal these days. Also, there seems to be an obsession with reaching the milestone of having things fast enough to re-parse on every keystroke. A significant achievement, but I think it's not really necessary, as most editors implement a delay before offering auto-suggestions anyway, so it is fine to re-parse a few hundred ms after the last keystroke.
via Elm - Latest posts by @system system on Tue, 22 Sep 2020 19:31:53 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @evelios Tommy Waters on Tue, 22 Sep 2020 17:54:58 GMT
I am looking to create a newspaper-column style format in my app that is using elm-ui. I have been able to figure out a way to create it, but it is not without complications. First off, I found that there were already some CSS properties that control this behavior, mainly column-count.
This works great if you are only using text for the periodical. Here is an example of this in action.
https://ellie-app.com/b3PBXCNL94qa1
textColumn
[ Element.htmlAttribute
<| Html.Attributes.style "column-count" "2"
, Element.htmlAttribute
<| Html.Attributes.style "display" "block"
-- Alternatively "inline-block"
]
(...) -- Paragraphs and text elements
However, when you have more than Element.paragraph and Element.text in there, things start to go awry. Changing from display = flex to display = block has consequences for any other elements that may be in your content body. It looks like elm-ui makes heavy use of the flex property for its styling.
The pictures show what is happening to me, because column [] blocks are getting messed up.
Bottom of the first column
Top of the second column
Unfortunately, the best I was able to do to recreate this issue was this Ellie, which doesn’t really show the same problems that I had in my project. It does somewhat capture a problem state: if you are willing to horizontally resize the browser carefully, you can see states where the M is shifted down into the text.
https://ellie-app.com/b3Qg7dmWVJPa1
The last thing I’ll say is that I was looking to create more of a book experience, where the user would only see the content that fits on that page, and when they scroll they would get the continuation of the content. If I was to use only basic formatting and stick with what I had in my Ellies, the user would have to read the first column all the way down to the bottom and then pick back up at the top of the page. I haven’t been able to come up with an approach that would break up the content into one viewport’s worth of information. If there was a way of figuring that out, then I could do this only in elm-ui and just use groups of columns that I create myself.
Please let me know what your thoughts are on all of this and if you know of anything I could try to accomplish any of this. It seems like I am stretching the limits of both elm-ui and css in general on this one by adding in the page-break functionality.
via Elm - Latest posts by @system system on Mon, 21 Sep 2020 21:32:14 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Planet Lisp by on Mon, 21 Sep 2020 19:23:46 GMT
This is a simple library which allows you to define global variables and save/restore their state to/from some persistent storage.
For example, we can define variables for database host and password:
;; In real application you should define these
;; variables in the lisp file:
POFTHEDAY> (persistent-variables:defpvar *password*)
POFTHEDAY> (persistent-variables:defpvar *db-host*)
;; Then in the REPL you can setup the app
POFTHEDAY> (setf *password* "Some $ecret")
POFTHEDAY> (setf *db-host* "some-host.internal-to.my-company.com")
;; And save their state:
POFTHEDAY> (with-open-file (stream "/tmp/app.config"
:if-does-not-exist :create
:if-exists :supersede
:direction :output)
(persistent-variables:pv-save stream))
;; At startup your app might restore values for these variables:
POFTHEDAY> (with-open-file (stream "/tmp/app.config"
:direction :input)
(persistent-variables:pv-load stream))
What this system does is save all symbols defined with defpvar into a hash-table. pv-save and pv-load then serialize and deserialize them as sexps:
POFTHEDAY> (rutils:print-ht persistent-variables::*persisted*)
#{
:DEFAULT '(*DB-HOST* *PASSWORD*)
}
POFTHEDAY> (with-output-to-string (s)
(persistent-variables:pv-save s))
"(\"POFTHEDAY\" \"*DB-HOST*\" \"\\\"some-host.internal-to.my-company.com\\\"\")
(\"POFTHEDAY\" \"*PASSWORD*\" \"\\\"Some $ecret\\\"\")
"
This library can be useful for interactive applications where the user can change settings which should be restored on restart. You might also be interested in the ubiquitous library, which I didn't review yet.