Feed Aggregator Page 675
Rendered on Thu, 25 Nov 2021 18:33:57 GMT
via Elm - Latest posts by @rupert Rupert Smith on Thu, 25 Nov 2021 12:08:53 GMT
I think we are generally on the same page on this.
One complication, and part of the reason for my viewpoint, is that currently we cannot send PRs to the core repos because the BDFL is not engaging with this (and has given his reasons). So a company maintaining Elm would necessarily have to create its own package repo system. For example, supermario mentioned that Lamdera could probably already take on that role, since he pretty much had to solve that problem already.
I am not against businesses paying for fixes, or having employees or consultants with committer rights on the open source projects they consume. That is very often the case with Apache projects, for example: many AWS services are obviously bundled and branded Apache projects. Another example of a company that consumes many of them is Red Hat. They definitely have contributors on the projects they are interested in, but the governance model of Apache does not allow them to capture a project exclusively.
I think if you work as an Elm consultant or a web-dev business, it can be a good selling point if you can tell your customers that you are involved in core maintenance, have their back, can get critical fixes in if the need arises. Selling that as a service might be a good source of occasional side revenue. Or it might get you onto a project where you are expected to play that role if the need arises.
But I still think we would need neutral, democratic, collectively run ownership of core maintenance, sitting outside of any particular business operation.
via Elm - Latest posts by @rupert Rupert Smith on Thu, 25 Nov 2021 11:52:58 GMT
Eco - could stand for Elm Compiler Offline. That is, it's the Elm compiler with the built-in packaging stuff moved into a separate tool (eco-install). It's also about opening up the Elm ecosystem.
via Elm - Latest posts by @gampleman Jakub Hampl on Thu, 25 Nov 2021 10:55:05 GMT
Surely the natural name would be Elf. It could even be a backronym for Elm Language Fork.
via Planet Lisp by on Thu, 25 Nov 2021 10:39:18 GMT
I maintain a web application written in Common Lisp, used by real-world clients (incredible, I know), and I finally got to finish two little additions:
The HTML cleanup part is about how to use LQuery for the task. Its doc shows the remove function from the beginning, but I had difficulty finding out how to use it. Here's how (see issue #11).
https://shinmera.github.io/lquery/
LQuery has remove, remove-attr, remove-class and remove-data. It seems pretty capable.
Let’s say I got some HTML and I parsed it with LQuery. There are two buttons I would like to remove (you know, the “read more” and “close” buttons that are inside the book summary):
(lquery:$ *node* ".description" (serialize))
;; HTML content...
;; <button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>
;; <button type=\"button\" class=\"description-btn js-descriptionClose\"><span class=\"mr-005\">Fermer</span><i class=\"far fa-chevron-up\" aria-hidden=\"true\"></i></button></p>")
On GitHub, @shinmera tells us we can simply do:
($ *node* ".description" (remove "button") (serialize))
Unfortunately, I tried that and I still saw the two buttons in the node and in the output. What worked for me is the following. First, select the buttons:
(lquery:$ *NODE* ".description button" (serialize))
;; => output
Then call remove. This returns the removed elements on the REPL, but they are correctly removed from the node (a global var passed as parameter):
(lquery:$ *NODE* ".description button" (remove) (serialize))
;; #("<button type=\"button\" class=\"description-btn js-descriptionOpen\"><span class=\"mr-005\">Lire la suite</span><i class=\"far fa-chevron-down\" aria-hidden=\"true\"></i></button>"
Now if I check the description field:
(lquery:$ *NODE* ".description" (serialize))
;; ...
;; </p>")
I have no more buttons \o/
Now to pagination.
This is my 2c, hopefully this will help someone do the same thing quicker, and hopefully we’ll abstract this in a library...
On my web app I display a list of products (books). We have a search box with a select input in order to filter by shelf (category). If no shelf was chosen, we displayed only the 200 most recent books. No need for pagination, yet... There were only a few thousand books in total, so we could show a shelf entirely; it was a few hundred books per shelf at most. But the bookshops grew, and my app crashed once (thanks, Sentry and cl-sentry). Here's how I added pagination. You can find the code here and the Djula template there.
The goal is to get this, if possible in a re-usable way:
I simply create a dict object with required data:
(defun make-pagination (&key (page 1) (nb-elements 0) (page-size 200)
(max-nb-buttons 5))
"From a current page number, a total number of elements, a page size,
return a dict with all of that, and the total number of pages.
Example:
(make-pagination :nb-elements 1001)
;; =>
(dict
:PAGE 1
:NB-ELEMENTS 1001
:PAGE-SIZE 200
:NB-PAGES 6
:TEXT-LABEL \"Page 1 / 6\"
)
"
(let* ((nb-pages (get-nb-pages nb-elements page-size))
(max-nb-buttons (min nb-pages max-nb-buttons)))
(serapeum:dict :page page
:nb-elements nb-elements
:page-size page-size
:nb-pages nb-pages
:max-nb-buttons max-nb-buttons
:text-label (format nil "Page ~a / ~a" page nb-pages))))
(defun get-nb-pages (length page-size)
"Given a total number of elements and a page size, compute how many pages fit in there.
(if there's a remainder, add 1 page)"
(multiple-value-bind (nb-pages remainder)
(floor length page-size)
(if (plusp remainder)
(1+ nb-pages)
nb-pages)))
#+(or)
(assert (and (= 30 (get-nb-pages 6000 200))
(= 31 (get-nb-pages 6003 200))
(= 1 (get-nb-pages 1 200))))
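The same page-count calculation, transcribed to JavaScript for readers who want to check the arithmetic (a sketch mirroring get-nb-pages above):

```javascript
// Number of pages needed to hold `length` elements at `pageSize` per page.
// Integer division, plus one extra page if there is a remainder --
// the same logic as get-nb-pages.
function getNbPages(length, pageSize) {
  return Math.floor(length / pageSize) + (length % pageSize > 0 ? 1 : 0);
}

console.log(getNbPages(6000, 200)); // 30
console.log(getNbPages(6003, 200)); // 31
console.log(getNbPages(1, 200));    // 1
```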
You call it:
(make-pagination :page page
:page-size *page-length*
:nb-elements (length results))
then pass it to your template, which can {% include %} the template given above, which will create the buttons (we use Bulma CSS there).
When you click a button, the new page number is given as a GET parameter. You must catch it in your route definition, for example:
(easy-routes:defroute search-route ("/search" :method :get) (q shelf page)
...)
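The page parameter arrives as a string and may be missing or out of range. A small validation sketch (JavaScript for illustration; the function name and fallback behaviour are my assumptions, the original route is Lisp):

```javascript
// Parse a raw `page` query parameter and clamp it to the valid range [1, nbPages].
// Missing or malformed input falls back to page 1.
function parsePageParam(raw, nbPages) {
  const n = parseInt(raw, 10);
  if (Number.isNaN(n) || n < 1) return 1;
  return Math.min(n, Math.max(nbPages, 1));
}

parsePageParam("3", 6);   // 3
parsePageParam("abc", 6); // 1
parsePageParam("99", 6);  // 6
```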
Finally, I updated my web app while it runs (it's more fun, and why shut it down? I've been doing this for two years and so far all goes well, though I try not to upgrade the Quicklisp dist: that went badly once, because of external, system-wide dependencies) (see this demo-web-live-reload).
That's exactly the sort of thing that should be extracted into a library, so we can focus on our application, not on trivial things. I started that work, but I'll spend more time on it next time I need it... call it "needs-driven development".
Happy lisping.
via Elm - Latest posts by @system system on Thu, 25 Nov 2021 10:33:30 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @Janiczek Martin Janiczek on Thu, 25 Nov 2021 09:20:29 GMT
I believe the need to differentiate is in name and logo.
Let’s say the fork is named Frob (horrible, I know). In my mind the Frob webpage could mention that it’s based on Elm 0.19 or that it’s a fork of Elm 0.19 with focus on X, Y, Z…
via Elm - Latest posts by @nil on Thu, 25 Nov 2021 09:08:34 GMT
There are sooo few bugs in the Excel doc linked above for a language that is in pre-release, where the latest release has been out for almost 2 years. It would be nice to have an effort to squash them (or at least try, and if it's complicated, defer it). Even a "go" from a maintainer that "PRs for these are welcome, and we will review them in N weeks" would be a nice solution.
via Elm - Latest posts by @Maldus512 Mattia Maldini on Thu, 25 Nov 2021 09:02:02 GMT
Remember that the name should be different. I'm not sure how much marketing benefit would be left after having to create an entirely different brand.
via Elm - Latest posts by @curiousme Mr. Curious on Thu, 25 Nov 2021 08:47:32 GMT
Here’s a chunk of code I use for dropping files and whole folders.
The JavaScript part that scans the folder has to deal with large folders, as browsers apparently return only 100 entries at a time. The code also ignores 0-length files.
var activeAsyncCalls = 0
var filesRemaining = 0
function scanDirectory(directory, path, onComplete) {
let dirReader = directory.createReader();
let container = { name: path, files: [], dirs: [] }
let errorHandler = error => {
activeAsyncCalls--;
}
var readEntries = () => {
activeAsyncCalls++
dirReader.readEntries(entries => {
if (entries.length > 0 && filesRemaining > 0) {
for (let entry of entries) {
if (entry.name.substring(0, 1) != '.') {
if (entry.isFile && filesRemaining > 0) {
activeAsyncCalls++
entry.file(file => {
if (filesRemaining > 0 && file.size > 0) {
container.files.push(file);
filesRemaining--
}
activeAsyncCalls--
});
} else if (entry.isDirectory) {
container.dirs.push(scanDirectory(entry, `${path}/${entry.name}`, onComplete));
}
}
}
// Recursively call readEntries() again, since browsers only handle
// the first 100 entries.
// See: https://developer.mozilla.org/en-US/docs/Web/API/DirectoryReader#readEntries
readEntries();
}
activeAsyncCalls--
if (activeAsyncCalls == 0) {
onComplete()
}
}, errorHandler);
};
readEntries();
return container;
}
function scanDropped(folderId, items, onComplete) {
var container = { name: folderId, files: [], dirs: [] };
for (let item of items) {
var entry;
if ((item.webkitGetAsEntry != null) && (entry = item.webkitGetAsEntry())) {
if (entry.isFile && filesRemaining > 0) {
container.files.push(item.getAsFile());
filesRemaining--
} else if (entry.isDirectory) {
container.dirs.push(scanDirectory(entry, entry.name, onComplete));
}
} else if (item.getAsFile != null) {
if ((item.kind == null) || (item.kind === "file")) {
container.files.push(item.getAsFile());
filesRemaining--
}
}
if (filesRemaining <= 0) break
}
return container;
}
function readChunk(file, start, end, callback) {
var blob = file.slice(start, end);
var reader = new FileReader();
reader.onloadend = function () {
callback(reader.error, reader.result);
}
reader.readAsArrayBuffer(blob);
}
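readChunk reads a single [start, end) slice; the caller still has to produce the boundaries. A small helper for that (hypothetical, not part of the original code):

```javascript
// Split a file of `fileSize` bytes into [start, end) ranges of at most
// `chunkSize` bytes, suitable for feeding to readChunk one at a time.
function chunkRanges(fileSize, chunkSize) {
  const ranges = [];
  for (let start = 0; start < fileSize; start += chunkSize) {
    ranges.push([start, Math.min(start + chunkSize, fileSize)]);
  }
  return ranges;
}

chunkRanges(10, 4); // [[0, 4], [4, 8], [8, 10]]
```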
You also need to set up ports. The drop event comes from the Elm side (scanTree), and JS then returns the result of the scan through the fileTree port.
if (elm.ports.scanTree && elm.ports.fileTree) {
elm.ports.scanTree.subscribe(function ({ e, maxFiles, folderId }) {
if (e && e.dataTransfer) {
// I forgot what this limit of 50 is for. I think the application stipulates the ability to limit the
// number of files that can be uploaded at once for certain users, and 50 is just an arbitrary default
filesRemaining = maxFiles || 50
activeAsyncCalls = 0
let items = e.dataTransfer.items;
let sent = false
var container
let onComplete = () => {
if (!sent && elm.ports.fileTree) {
elm.ports.fileTree.send(container)
sent = true
}
}
container = scanDropped(folderId, items, onComplete);
if (activeAsyncCalls == 0 || filesRemaining <= 0) {
onComplete()
}
// Backup in case we had a bug and undercounted activeAsyncCalls;
// also, send a temporary result if the scan is taking too long
setTimeout(() => {
if (!sent && elm.ports.fileTree) {
elm.ports.fileTree.send(container)
}
}, 400)
} else {
if (console && console.error) {
if (!e)
console.error("e is null!");
else
console.error("e.dataTransfer is null!");
}
}
})
}
Finally, relevant parts on Elm side:
port scanTree : { e : D.Value, maxFiles : Int, folderId : String } -> Cmd msg
port fileTree : (D.Value -> msg) -> Sub msg
...
-- update
FilesDropped v ->
case model.folderId of
Just f ->
( { model | hover = False }, scanTree { e = v, maxFiles = model.maxFiles, folderId = f.id }, NoAction)
GotDroppedFiles ((Dir folderId _ _) as dir) ->
let
unroll (Dir _ files dirs) =
files ++ List.concatMap unroll dirs
newBatch =
unroll dir |> dedupe folderId
newlist =
model.files ++ newBatch
newState =
case ( model.queueState, List.length newlist ) of
( _, 0 ) ->
model.queueState
( Finished, _ ) ->
NotStarted
_ ->
model.queueState
-- start new queue only if not already busy
cmd =
case model.activeFile of
Nothing ->
if List.length newBatch > 0 then
startQueue model
else
Cmd.none
_ ->
Cmd.none
newAction =
case List.length newBatch of
0 ->
NoEvent
_ ->
NewFilesInQueue
totalSize =
(List.map .size newlist |> List.sum) + Maybe.withDefault 0 (Maybe.map .size model.activeFile) + (List.map .size model.processedFiles |> List.sum)
in
( { model | files = numberizeQueue model newlist, queueState = newState, queueSize = totalSize }
, cmd
, newAction
)
-- subscriptions
subscriptions : Model -> Sub Msg
subscriptions _ =
fileTree
(\v ->
case D.decodeValue directoryDecoder v of
Ok d ->
GotDroppedFiles d
Err e ->
DropError (D.errorToString e)
)
-- DECODERS
directoryDecoder : D.Decoder FileTree
directoryDecoder =
D.map3 Dir
(D.field "name" D.string)
(D.field "files" (D.list fileInfoWithValue))
(D.field "dirs" (D.list (D.lazy (\_ -> directoryDecoder))))
fileInfoWithValue : Decoder ( File, D.Value )
fileInfoWithValue =
-- We need the File value (for FileInfo), but still keep the raw value (for chunked decoding)
D.map2 Tuple.pair File.decoder D.value
Apologies for verbosity. As you can see, it’s a cut from a larger chunk of code that also handles the upload queue, etc. I’m sure you’ll be able to simplify to get what you want.
via Elm - Latest posts by @system system on Thu, 25 Nov 2021 07:23:32 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @lucamug on Thu, 25 Nov 2021 00:11:27 GMT
An “Elm 1.0, Corporate Edition” could be useful for marketing reasons, to facilitate adoption.
Other than that, for the way we use Elm at work, I don’t have any strong complaints/requirements about the present Elm implementation.
Edit: I was correctly reminded that, if this is a fork, the name needs to be different from "Elm".
via Elm - Latest posts by @albertdahlin Albert Dahlin on Wed, 24 Nov 2021 22:06:04 GMT
at ["type", "name"] string
|> andThen decodeRecordTypeDetails
No, it will not. It would operate on the root object. Here is an ellie that demonstrates that it works.
via Elm - Latest posts by @jaruji on Wed, 24 Nov 2021 21:42:41 GMT
I'll definitely try something along these lines, thank you both for your help.
via Elm - Latest posts by @system system on Wed, 24 Nov 2021 21:26:46 GMT
This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.
via Elm - Latest posts by @ben-t on Wed, 24 Nov 2021 19:38:48 GMT
Thanks, @wolfadex
Am I right in thinking this is essentially the solution going via an intermediate data structure? That is, map5 ... to get to the intermediate data structure, then andThen \(lambda...) to get from the intermediate to the final data structure?
via Elm - Latest posts by @hasko Hasko on Wed, 24 Nov 2021 19:36:40 GMT
Thanks for the quick response. I’m aware of the progress feature but unfortunately the response size is pretty unpredictable. Sigh, I guess I’ll have to resort to JS like you suggested.
via Elm - Latest posts by @wolfadex Wolfgang Schuster on Wed, 24 Nov 2021 19:08:55 GMT
This is a really challenging data structure you have here. You're right that andThen alone won't work. My current thought is to have multiple decoders that each attempt to decode one of the metadata fields, and then Json.Decode.andThen on both the type information and the collection of potential metadata fields, to see whether the metadata for the type was found. Something like
{
"id": 2,
"timestamp": "etc",
"type": {"id": 4, "name": "bed"},
"bed": {
// (Bed metadata goes here)
}
}
decodeData =
    map5
        (\id timestamp type_ maybeBed maybeChair ->
            ( Data id timestamp, type_, { bed = maybeBed, chair = maybeChair } )
        )
        (field "id" int)
        (field "timestamp" string)
        (at [ "type", "name" ] string)
        (maybe (field "bed" bedMetaDecoder))
        (maybe (field "chair" chairMetaDecoder))
        |> andThen
            (\( dataFn, type_, rec ) ->
                case type_ of
                    "chair" ->
                        case rec.chair of
                            Just chairMeta ->
                                succeed (dataFn chairMeta)

                            Nothing ->
                                fail "Expected 'chair' metadata"

                    "bed" ->
                        case rec.bed of
                            ...

                    _ ->
                        fail "unknown type"
            )
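For comparison, the same decode-then-validate shape in plain JavaScript (a hedged sketch, not from the thread; the field names follow the JSON example above):

```javascript
// Parse the payload, then check that the metadata object named by type.name
// is actually present -- the same check the Elm andThen performs.
function decodeData(json) {
  const obj = JSON.parse(json);
  const typeName = obj.type && obj.type.name;
  const meta = obj[typeName];
  if (meta === undefined) {
    throw new Error(`Expected '${typeName}' metadata`);
  }
  return { id: obj.id, timestamp: obj.timestamp, type: typeName, meta: meta };
}

const result = decodeData(
  '{"id":2,"timestamp":"etc","type":{"id":4,"name":"bed"},"bed":{"size":"queen"}}'
);
// result.type === "bed", result.meta.size === "queen"
```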
via Elm - Latest posts by @DullBananas on Wed, 24 Nov 2021 18:59:50 GMT
Elm’s Http module does not allow that, but it does let you show a progress bar if your backend can predict the size of the response.
To access data from a request while it’s being received, you need to do the request in JavaScript and use ports. Then use or create a JSON parser that works with streaming data.
If the backend can quickly access a node when given an ID, then there’s another solution: make the backend send only the node IDs in the first response, then the frontend can make one request for each node to get the nodes’ data.
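If the backend streams newline-delimited JSON (one node per line; the wire format is my assumption, the thread doesn't fix one), the JavaScript side can emit complete objects as chunks arrive and forward each through a port. A minimal splitter sketch:

```javascript
// Accumulates incoming text chunks and calls onObject for each complete
// NDJSON line, keeping any trailing partial line in the buffer.
function makeNdjsonSplitter(onObject) {
  let buffer = "";
  return function feed(chunk) {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      if (line.trim() !== "") onObject(JSON.parse(line));
    }
  };
}

// Usage: feed it text chunks from a fetch() body reader,
// then send each object to Elm through a port.
const seen = [];
const feed = makeNdjsonSplitter(obj => seen.push(obj));
feed('{"id":1}\n{"id');
feed('":2}\n');
// seen is now [{ id: 1 }, { id: 2 }]
```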
via Elm - Latest posts by @hasko Hasko on Wed, 24 Nov 2021 17:24:32 GMT
So, I have a backend (Python flask) that traverses a large graph to create the transitive hull starting from a specific node, i.e. all reachable nodes from that node. It then wraps the result in Json and sends it off to an Elm front end.
Now I’m thinking to use flask’s streaming pattern to send parts of the list of nodes already while it’s being discovered, to provide early feedback to the user.
Is there a way to partially decode Json in Elm and treat it as a stream, e.g. through a subscription?
via Elm - Latest posts by @ben-t on Wed, 24 Nov 2021 16:22:41 GMT
Thanks @wondible
I might be misunderstanding the behaviour of at. With this solution, wouldn't the decoder returned by decodeRecordTypeDetails operate on the actual name of the type? So although the types line up, I don't think this would work, because we need to apply that decoder to a field in the original outer JSON.
I tried something which I think was similar and got the following result:
Got bad body (Problem with the value at json[0].type:
{
"id": 35,
"name": "Chair"
}
Expecting an OBJECT with a field named `type`) when attempting to load JSON!
Have I misunderstood?