Feed Aggregator Page 647
Rendered on Thu, 17 Sep 2020 16:33:36 GMT
via Planet Lisp on Wed, 16 Sep 2020 19:50:21 GMT
This library is a port of Django's template engine to Common Lisp, and it has several cool features.
Also, there is nice documentation, so I won't give many examples here. Instead, let's implement a small function for our HTML templating engine performance test.
I didn't find a way to load a template from a string. That is why we need to set up the library and tell it where to search for template files:
POFTHEDAY> djula:*current-store*
#<DJULA:FILE-STORE {100248A8C3}>

POFTHEDAY> (djula:find-template djula:*current-store*
                                "test.html")
; Debugger entered on #<SIMPLE-ERROR "Template ~A not found" {1003D5F073}>
[1] POFTHEDAY>
; Evaluation aborted on #<SIMPLE-ERROR "Template ~A not found" {1003D5F073}>

POFTHEDAY> (djula:add-template-directory "templates/")
("templates/")
Now we need to put the following template into templates/test.html:
<h1>{{ title }}</h1>
<ul>
{% for item in items %}
<li>{{ item }}</li>
{% endfor %}
</ul>
And we can test it:
POFTHEDAY> (djula:find-template djula:*current-store*
                                "test.html")
#P"/Users/art/projects/lisp/lisp-project-of-the-day/templates/test.html"
POFTHEDAY> (with-output-to-string (s)
             (djula:render-template* (djula:compile-template* "test.html")
                                     s
                                     :title "Foo Bar"
                                     :items '("One" "Two" "Three")))
"<h1>Foo Bar</h1>
<ul>
<li>One</li>
<li>Two</li>
<li>Three</li>
</ul>
"
It is time to measure performance:
;; We need this to turn off autoreloading
;; and get good performance:
POFTHEDAY> (pushnew :djula-prod *features*)
POFTHEDAY> (defparameter *template*
             (djula:compile-template* "test.html"))

POFTHEDAY> (defun render (title items)
             (with-output-to-string (s)
               (djula:render-template* *template*
                                       s
                                       :title title
                                       :items items)))

POFTHEDAY> (time
            (loop repeat 1000000
                  do (render "Foo Bar"
                             '("One" "Two" "Three"))))
Evaluation took:
4.479 seconds of real time
4.487983 seconds of total run time (4.453540 user, 0.034443 system)
[ Run times consist of 0.183 seconds GC time, and 4.305 seconds non-GC time. ]
100.20% CPU
9,891,631,814 processor cycles
1,392,011,008 bytes consed
Pay attention to the line adding :djula-prod to *features*. It disables auto-reloading. With auto-reloading enabled, rendering is about 2 times slower and takes 10.6 microseconds per call.
I can recommend Djula to everybody who works in a team where HTML designers write the templates and don't want to dive into Lisp code. With Djula they will be able to easily fix templates and see the results without changing the backend's code.
Also, today I've decided to create a baseline function which creates HTML using string concatenation as fast as possible. This way we'll be able to compare different HTML templating engines with hand-written code:
POFTHEDAY> (defun render-concat (title items)
             "This function does not do proper HTML escaping."
             (flet ((to-string (value)
                      (format nil "~A" value)))
               (apply #'concatenate
                      'string
                      (append (list "<title>"
                                    (to-string title)
                                    "</title>"
                                    "<ul>")
                              (loop for item in items
                                    collect "<li>"
                                    collect (to-string item)
                                    collect "</li>")
                              (list "</ul>")))))
POFTHEDAY> (render-concat "Foo Bar"
                          '("One" "Two" "Three"))
"<title>Foo Bar</title><ul><li>One</li><li>Two</li><li>Three</li></ul>"
POFTHEDAY> (time
            (loop repeat 1000000
                  do (render-concat "Foo Bar"
                                    '("One" "Two" "Three"))))
Evaluation took:
0.930 seconds of real time
0.938568 seconds of total run time (0.919507 user, 0.019061 system)
[ Run times consist of 0.114 seconds GC time, and 0.825 seconds non-GC time. ]
100.97% CPU
2,053,743,332 processor cycles
864,022,384 bytes consed
Writing to a stream is a little bit slower, so we'll take the result of render-concat as the baseline:
POFTHEDAY> (defun render-stream (title items)
             "This function does not do proper HTML escaping."
             (flet ((to-string (value)
                      (format nil "~A" value)))
               (with-output-to-string (out)
                 (write-string "<title>" out)
                 (write-string (to-string title) out)
                 (write-string "</title><ul>" out)
                 (loop for item in items
                       do (write-string "<li>" out)
                          (write-string (to-string item) out)
                          (write-string "</li>" out))
                 (write-string "</ul>" out))))
WARNING: redefining POFTHEDAY::RENDER-STREAM in DEFUN
RENDER-STREAM
POFTHEDAY> (time
            (loop repeat 1000000
                  do (render-stream "Foo Bar"
                                    '("One" "Two" "Three"))))
Evaluation took:
1.208 seconds of real time
1.214637 seconds of total run time (1.196847 user, 0.017790 system)
[ Run times consist of 0.102 seconds GC time, and 1.113 seconds non-GC time. ]
100.58% CPU
2,667,477,282 processor cycles
863,981,472 bytes consed
By the way, I tried to use str:replace-all for escaping the < and > characters in the hand-written render-concat function. But its performance degraded dramatically, to 36 microseconds per call. str:replace-all uses cl-ppcre for text replacement.
What should I use instead?
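One option, sketched here under the assumption that a plain character loop is acceptable: escape by hand with a single pass over the string, avoiding cl-ppcre entirely. escape-html is a hypothetical helper, not part of any library mentioned above:

```lisp
(defun escape-html (string)
  "Escape the characters that matter for HTML injection.
A minimal sketch; a production version should also handle
quote characters inside attribute values."
  (with-output-to-string (out)
    (loop for char across string
          do (case char
               ;; & must be handled too, or entities get corrupted.
               (#\& (write-string "&amp;" out))
               (#\< (write-string "&lt;" out))
               (#\> (write-string "&gt;" out))
               (t (write-char char out))))))
```

A single pass with a case dispatch conses only the output string and no intermediate sequences, so it should stay much closer to render-concat's speed than a regex-based replacement.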
via Elm - Latest posts by @kraklin Tomáš Látal on Wed, 16 Sep 2020 17:55:09 GMT
I would say so. The extension injects into the page after it has loaded, so I'd say there can be a delay between Debug.log being sent straight away and console.log being hooked up, as opposed to the NPM package, which is initialized before Elm even starts. I'm new to the whole extension space, so maybe it is somehow possible, but I'm afraid there will still be some delay between the page being loaded and the extension kicking in.
via Elm - Latest posts by @ymtszw Yu Matsuzawa on Wed, 16 Sep 2020 17:35:03 GMT
It looks to be working fine.
However, when Debug.log is called right after the Elm page renders (around the init function, for instance), that first debug print is not handled by the extension (it is shown in the default style).
After the page load settles, subsequent debug prints are correctly handled by the extension and formatted. Is this an intrinsic limitation of browser extensions? By that I mean: browser extensions are lazily loaded after the page load, so they cannot intervene in events that happened before they are loaded?
via Elm - Latest posts by @kraklin Tomáš Látal on Wed, 16 Sep 2020 16:38:43 GMT
So the first batch of beta versions has been sent out. Looking forward to hearing your feedback, and big thanks to you all
via Elm - Latest posts by @rupert Rupert Smith on Wed, 16 Sep 2020 14:36:00 GMT
That’s some hard-core reading material right there… thanks, I’ll give it a go.
I am wondering how much of this stuff is implementable on top of elm/parser. If that does not expose the features needed, I'll most likely just stick with what I can do by extending elm/parser, as I don't really want to re-design the parser from scratch.
Also noting that elm/parser is a kernel package - not sure if it really needs to be, or whether that is just for some speed-ups. But it means I can't just clone elm/parser and make small modifications.
via Elm - Latest posts by @miniBill Leonardo Taglialegne on Wed, 16 Sep 2020 10:58:47 GMT
I’m considering it.
Especially because I finally managed to have the API I wanted all along:
https://ellie-app.com/9Z34Xrhwm7Xa1
You can tell this is the “correct” API because the types are finally symmetric and clean.
Nontrivial insights I had to find: prettyPrinter inside the UrlCodec; prettyPrinter inside variant. The rest is basically all blindly solving the type puzzle.
via Elm - Latest posts by @razze Kolja Lampe on Wed, 16 Sep 2020 09:31:12 GMT
Here’s also some research, if you're keen to get into the details: https://tree-sitter.github.io/tree-sitter/#underlying-research
via Elm - Latest posts by @system system on Wed, 16 Sep 2020 09:01:24 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @Sebastian Sebastian on Wed, 16 Sep 2020 01:02:02 GMT
In elm-css you can do this using the Css.Global module.
We use elm-css, but we prefer to do this with plain CSS. We have a global CSS file with:
.displayOnParentHover {
display: none;
}
*:hover > .displayOnParentHover {
display: block;
}
And then add class "displayOnParentHover" to the element.
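For reference, a sketch of the Css.Global route mentioned above (assuming rtfeldman/elm-css; the class name simply mirrors the plain-CSS file):

```elm
module ParentHover exposing (globalStyles)

import Css exposing (block, display, none)
import Css.Global exposing (class, global, selector)
import Html.Styled exposing (Html)

-- The same two rules as the plain-CSS version, expressed
-- with Css.Global snippets: hide the child by default, and
-- show it when any parent is hovered.
globalStyles : Html msg
globalStyles =
    global
        [ class "displayOnParentHover" [ display none ]
        , selector "*:hover > .displayOnParentHover" [ display block ]
        ]
```

Rendering globalStyles once anywhere in the styled view makes the rules available page-wide, just like the global CSS file.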
via Elm - Latest posts by @neurodynamic on Tue, 15 Sep 2020 23:04:18 GMT
Essentially, can this css be represented in either library:
.child {
display: none;
}
.parent:hover .child {
display: block;
}
I’ve used elm-ui a lot, but haven't used elm-css at all. Looking at the docs, in both cases it seems like hover rules can only be applied to the element you're styling? I can't figure out a way to refer to whether a parent is being hovered over.
via Elm - Latest posts by @rupert Rupert Smith on Tue, 15 Sep 2020 19:58:22 GMT
Yes, having the editor insert the closing ], ), or } will help.
My project does include a source editor, if it did not I would not bother trying to do error recovery on the parsing. Top-level chunking is enough to restart the parser and allows multiple errors to be reported per file. Imagine for example, if Elm did not do this and only ever gave max 1 error per file, you would not get such a good insight into how many errors you have left to fix, or to choose the order in which you fix them.
My source language Salix is a data modelling language, and the compiler can feed it to a configurable set of output code generators. Each code generator can accept or require additional ‘properties’ to be added to the source. For example, if I model a 1:1 relationship between 2 records and am generating the DDL for a relational database, I need to decide which table will hold the foreign key. In that case, there might be a sql.fk property to tag one of the record fields with. So beyond parsing, I want feedback from later stages of the compiler to the editor in order to populate auto-completion lists with the available or required properties, and so on.
That is my motivation for exploring error recovery deeper within the AST, and not just top-level chunking. I want to successfully parse the property list, even when it is not syntactically correct.
This illustrates the feedback loop:
Editor --- (Source) ---> Parser --- (AST) --> Later Compiler Stages
^--------------------Context Sensitive Help ------------|
This also leads me to thinking, do I need some kind of additional ‘hole’ in my AST? So where there is an error in the source, I could mark a hole that later stages of the compiler could provide auto completion suggestions for? Perhaps I don’t need to do this, as I am only interested in errors that occur where the cursor currently is, so I can figure out where in the AST the cursor is, and build the auto completion list based on that.
I think where an error occurs, and the parser chomps to recover, I need to capture the chomped string in the error (using getChompedString). I possibly need to trim off the whitespace around that chomped string too. This string and its location will then allow the editor to know what to replace with suggestions from the auto-completion list.
via Elm - Latest posts by @system system on Tue, 15 Sep 2020 14:43:50 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @joelq Joël Quenneville on Tue, 15 Sep 2020 13:59:49 GMT
Love the demo!
I’ve used this algorithm before on an elm plotting project but did the downsampling on the backend. Glad to see this coming to the Elm world!
via Elm - Latest posts by @klazuka Keith Lazuka on Mon, 14 Sep 2020 21:18:33 GMT
If your project also provides a source editor, then one thing you can do to mitigate the missing closing token (e.g. ], ), or }) would be to always insert a matching closing token when the user types an opening token. This doesn’t guarantee that you’ll always have a closing token, but it will probably work most of the time.
via Elm - Latest posts by @rupert Rupert Smith on Mon, 14 Sep 2020 20:07:28 GMT
I can see that recovery is often likely to consist of chomping until one of some list of characters is reached. Using Elm as an example, if we have:
val = [ 1, 2, 3, ]
val = [ 1, 2, 3xy, 4]
Then chomping until ',' or ']' is hit is going to work.
Indeed, I can see that in the TolerantParser this recovery strategy is already coded:
https://github.com/mdgriffith/elm-markup/blob/3.0.1/src/Mark/Internal/TolerantParser.elm#L129
type OnError error
= FastForwardTo (List Char)
| Skip
| StopWith error
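A FastForwardTo-style recovery can also be sketched with plain elm/parser primitives; skipToRecoveryPoint is a hypothetical helper, not part of any package discussed here:

```elm
module Recover exposing (skipToRecoveryPoint)

import Parser exposing (Parser)

{-| Chomp characters until one of the recovery characters is
reached (e.g. [ ',', ']' ] inside a list literal), returning the
skipped text so it can be attached to the reported error.
-}
skipToRecoveryPoint : List Char -> Parser String
skipToRecoveryPoint recoveryChars =
    Parser.getChompedString
        (Parser.chompWhile (\c -> not (List.member c recoveryChars)))
```

Because getChompedString captures exactly what was skipped, an editor could later offer that span as the region to replace with a suggestion.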
But there is another common case that should come up often during interactive editing: when the list has not been closed yet. Suppose I am typing:
val = [ 1, 2, 3,
                ^ cursor is here
I didn't close the list yet, and chomping for the comma or end-of-list is not going to work. An editor could theoretically want to infer the type of the list by recovering to get the list [1, 2, 3] and then offer help relevant to a List Int context.
I am thinking this could be done by always pushing the closure of the current AST structure onto the context. By which I mean, if say [ is encountered, the expected closure is ]. Or ( and ), or let and in, and so on.
When an error is encountered, in addition to chomping for certain characters, the parser could also try popping the next closure off the context stack and acting as if it had been encountered, and then try to continue from there. If that fails, it could rewind and try popping another one, and so on, until none are left.
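For the unclosed-list case in particular, one simple sketch is a oneOf fallback that treats the end of the source as an implicit closer; closeListOrRecover is a hypothetical name, not an existing API:

```elm
module RecoverEnd exposing (closeListOrRecover)

import Parser exposing (Parser)

{-| Accept a real "]" or, if the input simply ran out (the user
is still typing), succeed at end of input as if the closer had
been seen, so a partial list like [ 1, 2, 3, can still parse.
-}
closeListOrRecover : Parser ()
closeListOrRecover =
    Parser.oneOf
        [ Parser.symbol "]"
        , Parser.end
        ]
```

This only handles end-of-input; recovering mid-file would still need something like the closure stack described above.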
val =
[ (1, "one"
, (2, "two")
]
This doesn’t seem like it will always produce a good result. Here I forgot the ) on the first list entry, so the parser won’t notice until it hits ], at which point it will pop the ) and parse the expression as [ (1, "one", (2, "two")) ].
I don’t see a function to pop the context stack, only inContext to push onto it, so I am not yet sure how this will work. I guess I can always push a new context onto it that is interpreted as overriding the next one, so I can modify the context in that way even without a pop operation.
via Planet Lisp on Mon, 14 Sep 2020 18:24:57 GMT
Spinneret is a sexp-based templating engine similar to cl-who, which was reviewed in post #0075. Today we'll reimplement the snippets from the cl-who post, and I'll show you a few features I especially like in Spinneret.
The first example is very simple. It is almost identical to cl-who, but more concise:
POFTHEDAY> (spinneret:with-html-string
             (:body
              (:p "Hello world!")))
"<body>
<p>Hello world!
</body>"
The next example in the cl-who post showed how to escape values properly to protect your site from JavaScript injection attacks. With Spinneret, you don't need this, because it always escapes values.
But if you really need to inject HTML or JS into the page, then you have to use raw mode:
POFTHEDAY> (defclass user ()
             ((name :initarg :name
                    :reader get-name)))

POFTHEDAY> (let ((user (make-instance
                        'user
                        :name "Bob <script>alert('You are hacked')</script>")))
             (spinneret:with-html-string
               (:div :class "comment"
                     ;; Here Spinneret protects you:
                     (:div :class "username"
                           (get-name user))
                     ;; This way you can force RAW mode.
                     ;; DON'T do this unless the value is from the
                     ;; trusted source!
                     (:div :class "raw-user"
                           (:raw (get-name user))))))
"<div class=comment>
<div class=username>
Bob <script>alert('You are hacked')</script>
</div>
<div class=raw-user>Bob <script>alert('You are hacked')</script>
</div>
</div>"
With cl-who you might misuse the str and esc functions. But with Spinneret there is less chance of making such a mistake.
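To make the contrast concrete, here is a sketch of the cl-who pitfall being referred to; comment-html is a hypothetical function and cl-who is assumed to be loaded:

```lisp
(defun comment-html (user-input)
  "Sketch of the cl-who footgun: STR inserts the value verbatim,
while ESC escapes it. Picking the wrong one opens an injection hole."
  (cl-who:with-html-output-to-string (out)
    ;; Escaped -- safe for untrusted input:
    (:div :class "safe" (cl-who:esc user-input))
    ;; Verbatim -- dangerous for untrusted input:
    (:div :class "unsafe" (cl-who:str user-input))))
```

Spinneret removes this choice by escaping everything unless you explicitly opt into :raw.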
Another cool Spinneret feature is its code walker. It allows mixing ordinary Common Lisp forms with HTML sexps. Compare this code snippet with the corresponding part of the cl-who post:
POFTHEDAY> (let ((list (list 1 2 3 4 5)))
             (spinneret:with-html-string
               (:ul
                (loop for item in list
                      do (:li (format nil "Item number ~A"
                                      item))))))
"<ul>
<li>Item number 1
<li>Item number 2
<li>Item number 3
<li>Item number 4
<li>Item number 5
</ul>"
We don't have to use wrappers like cl-who:htm and cl-who:esc here.
Finally, let's compare Spinneret's performance with Zenekindarl, reviewed yesterday:
POFTHEDAY> (declaim (optimize (debug 1) (speed 3)))

POFTHEDAY> (defun render (title items)
             (spinneret:with-html-string
               (:h1 title
                    (:ul
                     (loop for item in items
                           do (:li item))))))

POFTHEDAY> (time
            (loop repeat 1000000
                  do (render "Foo Bar"
                             '("One" "Two" "Three"))))
Evaluation took:
4.939 seconds of real time
4.950155 seconds of total run time (4.891959 user, 0.058196 system)
[ Run times consist of 0.078 seconds GC time, and 4.873 seconds non-GC time. ]
100.22% CPU
10,905,720,340 processor cycles
991,997,936 bytes consed
Sadly, in this test Spinneret is 3 times slower than Zenekindarl and CL-WHO. Probably that is because it conses more memory?
@ruricolist, do you have an idea why Spinneret is 3 times slower than CL-WHO?
via Elm - Latest posts by @rupert Rupert Smith on Mon, 14 Sep 2020 14:46:13 GMT
I’ve been trying to collect my thoughts on this, feel free to chip in if anything chimes for you!
I think this is good advice, as recovery may need to understand the context of an error in order to successfully recover. If error reporting is already precise and gives good context, the code will likely be in a good place to implement recoveries. As I figure out the recovery strategies, I may need to revisit the context and adjust it better to that purpose.
Parsing is quite hard; so is producing good errors; so is error recovery. It seems prudent not to try to tackle all this complexity in a single pass. Fortunately, the way elm/parser is structured supports this well, as does Matt's TolerantParser, since Parser is simpler than Parser.Advanced, which is simpler than TolerantParser. So…
1. Write a Parser that accepts the input; it does not matter that error reporting is not so great. Just get the shape of the parser right.
2. Switch up to Parser.Advanced, using type alias Parser a = Parser.Advanced.Parser Never Parser.Problem a to get started. Then start designing the Context, paying particular attention to areas where a recovery could be possible.
3. Look at error recovery.
Since I chunked ahead of using a Parser, rather than trying to parse everything in a single pass and recover to the start of chunks, each Parser will not be starting from the true line 1. So I at least need to pass the start line around to add to the row.
I could do this at the end, if I were only marking error positions, by post-processing the list of DeadEnds. In my case, I want to record source positions all through the AST on successful parsings too, so that errors in later stages of the compiler pipeline can also tie back to the source accurately. (I could also post-process the AST to add in the start line, but why make an extra pass if it's not really needed?)
At first I was trying to use Parser.Advanced.inContext to carry this global context around. Then I realised there is no getContext function, so how do I get the info back deeper in the parser? So I now think of the Parser context as a local context, and a separate global context can just be passed around as function parameters:
type alias TopLevelContext =
    { startLine : Int
    , ...
    }

someParser : TopLevelContext -> Parser AST
someParser global =
    ...
        |= someChildParser global

someChildParser : TopLevelContext -> Parser AST
...
via Elm - Latest posts by @ymtszw Yu Matsuzawa on Mon, 14 Sep 2020 13:33:40 GMT
Very nice! Registered right away.
This was precisely the situation, and an optional browser extension is definitely a nicer way to get the feature!
via Elm - Latest posts by @robin.heggelund Robin Heggelund Hansen on Mon, 14 Sep 2020 10:42:52 GMT
Shortly before the summer break I released Elm Warrior, a game that can be used to improve your understanding of Elm after a brief tutorial. I've since received requests for a getting-started guide for this game.
Here it is: https://dev.to/skinney/getting-started-with-elm-warrior-5b2n
I hope this can be of use for meetups and workshops, or just for personal gratification
via Elm - Latest posts by @RalfNorthman Ralf Northman on Mon, 14 Sep 2020 08:18:52 GMT
Initial release of elm-lttb, a package for efficient downsampling of data for plotting purposes. Sveinn Steinarsson's algorithm selects points that emphasize the peaks and valleys of the original data.