In a realistic web app, we want to show different content for different URLs:
How do we do that? We use the `elm/url` package to parse the raw URL strings into nice Elm data structures. This package makes the most sense when you just look at examples, so that is what we will do!
Say we have an art website where addresses like these should be valid:

- `/topic/pottery`
- `/topic/collage`
- `/blog/42`
- `/user/tom`
- `/user/bob/comment/42`
So we have topic pages, blog posts, user information, and a way to look up individual user comments. We would use the `Url.Parser` module to write a URL parser like this:
```elm
import Url.Parser exposing (Parser, (</>), int, map, oneOf, s, string)

type Route
  = Topic String
  | Blog Int
  | User String
  | Comment String Int

routeParser : Parser (Route -> a) a
routeParser =
  oneOf
    [ map Topic   (s "topic" </> string)
    , map Blog    (s "blog" </> int)
    , map User    (s "user" </> string)
    , map Comment (s "user" </> string </> s "comment" </> int)
    ]

-- /topic/pottery        ==>  Just (Topic "pottery")
-- /topic/collage        ==>  Just (Topic "collage")
-- /topic/               ==>  Nothing
--
-- /blog/42              ==>  Just (Blog 42)
-- /blog/123             ==>  Just (Blog 123)
-- /blog/mosaic          ==>  Nothing
--
-- /user/tom/            ==>  Just (User "tom")
-- /user/sue/            ==>  Just (User "sue")
--
-- /user/bob/comment/42  ==>  Just (Comment "bob" 42)
-- /user/sam/comment/35  ==>  Just (Comment "sam" 35)
-- /user/sam/comment/    ==>  Nothing
-- /user/                ==>  Nothing
```
The `Url.Parser` module makes it quite concise to turn valid URLs into nice Elm data!
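To actually run one of these parsers, we hand a `Url` value to `Url.Parser.parse`. Here is a small sketch; the `toRoute` helper and the `example.com` origin are just for illustration, since `Url.fromString` needs a full absolute URL:

```elm
import Url
import Url.Parser exposing (parse)

-- Turn a path string into a Route using routeParser from above.
toRoute : String -> Maybe Route
toRoute string =
  case Url.fromString ("https://example.com" ++ string) of
    Nothing ->
      Nothing

    Just url ->
      parse routeParser url

-- toRoute "/blog/42"     ==>  Just (Blog 42)
-- toRoute "/blog/mosaic" ==>  Nothing
```

In a real application you would not build URLs from strings like this; `Browser.application` hands you a `Url` value directly, as we will see below.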
Now say we have a personal blog where addresses like these are valid:

- `/blog/14/whale-facts`
- `/blog`
- `/blog?q=chabudai`
In this case we have individual blog posts and a blog overview with an optional query parameter. We need to add the `Url.Parser.Query` module to write our URL parser this time:
```elm
import Url.Parser exposing (Parser, (</>), (<?>), int, map, oneOf, s, string)
import Url.Parser.Query as Query

type Route
  = BlogPost Int String
  | BlogQuery (Maybe String)

routeParser : Parser (Route -> a) a
routeParser =
  oneOf
    [ map BlogPost  (s "blog" </> int </> string)
    , map BlogQuery (s "blog" <?> Query.string "q")
    ]

-- /blog/14/whale-facts  ==>  Just (BlogPost 14 "whale-facts")
-- /blog/14              ==>  Nothing
-- /blog/whale-facts     ==>  Nothing
--
-- /blog/                ==>  Just (BlogQuery Nothing)
-- /blog                 ==>  Just (BlogQuery Nothing)
-- /blog?q=chabudai      ==>  Just (BlogQuery (Just "chabudai"))
-- /blog/?q=whales       ==>  Just (BlogQuery (Just "whales"))
-- /blog/?query=whales   ==>  Just (BlogQuery Nothing)
```
The `</>` and `<?>` operators let us write parsers that look quite like the actual URLs we want to parse. And adding `Url.Parser.Query` allowed us to handle query parameters like `?q=chabudai`.
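If a page needed more than one query parameter, `Url.Parser.Query` also provides `map2` (and friends) for combining query parsers. A hypothetical sketch, assuming a `/search` page with `q` and `page` parameters:

```elm
import Url.Parser exposing (Parser, (<?>), s)
import Url.Parser.Query as Query

-- Hypothetical route for addresses like /search?q=whales&page=2
type alias Search =
  { query : Maybe String
  , page : Maybe Int
  }

searchParser : Parser (Search -> a) a
searchParser =
  s "search" <?> Query.map2 Search (Query.string "q") (Query.int "page")

-- /search?q=whales&page=2  ==>  Just { query = Just "whales", page = Just 2 }
-- /search                  ==>  Just { query = Nothing, page = Nothing }
```

Each query parameter is optional, so the result fields are `Maybe` values rather than failing the whole parse when a parameter is missing.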
Okay, now we have a documentation website with addresses like this:

- `/Basics`
- `/Maybe`
- `/List`
- `/List#map`
We can use the `fragment` parser from `Url.Parser` to handle these addresses like this:
```elm
import Url.Parser exposing (Parser, (</>), fragment, map, string)

type alias Docs =
  ( String, Maybe String )

docsParser : Parser (Docs -> a) a
docsParser =
  map Tuple.pair (string </> fragment identity)

-- /Basics    ==>  Just ("Basics", Nothing)
-- /Maybe     ==>  Just ("Maybe", Nothing)
-- /List      ==>  Just ("List", Nothing)
-- /List#map  ==>  Just ("List", Just "map")
-- /List#     ==>  Just ("List", Just "")
-- /List/map  ==>  Nothing
-- /          ==>  Nothing
```
So now we can handle URL fragments as well!
Now that we have seen a few parsers, we should look at how this fits into a `Browser.application` program. Rather than just saving the current URL like last time, can we parse it into useful data and show that instead?
The major new things are:

1. Our `update` parses the URL when it gets a `UrlChanged` message.
2. Our `view` function shows different content for different addresses!
It is really not too fancy. Nice!
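Concretely, the interesting wiring might look something like this. This is a sketch, not a full program: it assumes the `Msg` type (`LinkClicked` and `UrlChanged`) from the previous navigation example, the `Route` and `routeParser` from the art website above, and a model that stores the parsed route; the field and title names are illustrative:

```elm
import Browser
import Browser.Navigation as Nav
import Html exposing (text)
import Url
import Url.Parser

type alias Model =
  { key : Nav.Key
  , route : Maybe Route  -- the parsed version of the current URL
  }

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
  case msg of
    -- Parse the new URL into a Route whenever it changes.
    UrlChanged url ->
      ( { model | route = Url.Parser.parse routeParser url }
      , Cmd.none
      )

    LinkClicked urlRequest ->
      case urlRequest of
        Browser.Internal url ->
          ( model, Nav.pushUrl model.key (Url.toString url) )

        Browser.External href ->
          ( model, Nav.load href )

view : Model -> Browser.Document Msg
view model =
  { title = "My Art Site"
  , body =
      [ case model.route of
          Just (Topic topic) ->
            text ("Topic: " ++ topic)

          Just (Blog id) ->
            text ("Blog post " ++ String.fromInt id)

          _ ->
            text "Page not found!"
      ]
  }
```

The key idea is that the model stores the *parsed* route, so `view` can branch on nice Elm data instead of picking apart raw URL strings.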
But what happens when you have 10 or 20 or 100 different pages? Does it all go in this one `view` function? Surely it cannot all be in one file. How many files should it be in? What should be the directory structure? That is what we will discuss next!