Being a Beginner: Running your first Half Marathon

In March 2023 I ran the Cambridge Half Marathon. It was my first half marathon. I wanted to do something worthwhile that was bigger than myself, and I raised almost £1000 for Mind in the process, while also improving my own mental health by having a non-work goal to strive for.

I’m still firmly middle-to-back of the pack when it comes to racing, but I did beat my own goal by achieving a time of 1 hour, 47 minutes, and 48 seconds (my target was to beat 1:55:00).

Afterwards, a friend (who, as it turned out, did even better in their own race relative to their goal) asked for some tips on how I prepared. Since I’d already written this up for them, I figured I’d post it here. There are obviously a lot of guides and how-tos for this, but only a few are written from the perspective of someone new to sport, so hopefully this gives a slightly different perspective.

The regime

My starting point was being reasonably used to running, including some distance – I’d done 21km before, but it took over two hours – and I had eight weeks available for specific training.  I ran 3-4 times per week, ramping up from 25km per week to 45km per week near the end.  Here are some of the principles and ideas I followed for my training plan and preparation:

Include lots of low intensity distance: A lot of coaches advocate for doing a significant amount of training at lower intensities.  You can search for MAF training, 80/20 training or various others.  I’m not sure this directly translates from pros to amateurs (especially the 80% bit), but I followed the general principle and did a lot of distance at low heart rate (around 140-145bpm for me).  This helps to get the training volume/distance in and the aerobic fitness benefits without risking injury or causing excess fatigue that stops/slows down later training.  I definitely saw the paces I achieved at these heart rates improve over the training block.

Include pace/workout variety: Along with the above, it seems widely accepted that training at a variety of paces is beneficial.  My training included:

  • Slow runs (targeting low HR, 140-145 bpm – maybe just under 6:00/km for me).  These started at around 6.5km and went up to 19km for the weekend run.
  • Interval sessions: Maybe the best one (though it was very taxing) was 6x800m, progressing from 4:50/km down to 4:20/km – roughly from a bit slower than your best 5k pace to a bit faster – with 800m of slow jogging between repeats.  This is apparently good for increasing lactate threshold, VO2 max, and speed generally.  Another session I tried was 6x [2 mins almost as fast as I could go (around 4:10/km for me) + 3 mins jogging recovery], with warmup and cooldown.  I’m not sure how effective this workout was for me (the fast parts were perhaps too fast, requiring too much recovery in between and not getting the average HR high enough).
  • Progressive runs (e.g. 8km starting at 6:00+/km and speeding up every 1-2 kilometres, up to sub-5:00/km for the last one).  Good for getting experience of speeding up, or at least not slowing down, even when your legs are tired.
  • Long runs: Some were low HR; others incorporated another workout within them.  To avoid overdoing training intensity, I alternated these each weekend between a slow one and one that included one or more tempo (moderately hard) sections.  When I started, I was exhausted after the tempo version: 16km at 6:00/km with the middle 5km speeding up to 5:15/km.  Later I did one that was 3x (3km @ 5:15/km + 1km slow) plus warmup/cooldown; this also felt exhausting at the time but clearly showed improvement over the previous one.  Both were slower than my final race, which was 21km at a 5:06/km average.

Nutrition: For the race itself I used High5 energy gels. I took my own and avoided aid stations as they were crowded. I don’t particularly like using a lot of “fake” nutrition, but it’s needed when you’re working hard in the race (as I found out when I ran out of energy on an early training long run), and it’s also worth practising on the long runs.  I would use 1-2 on long training runs, and had 3 during the race.  Practising helps you to find ones that are compatible with your digestive system and that you can consume easily whilst running.

Sleep: Sleep is generally considered important for proper recovery from difficult workouts, and I can definitely agree that my performance is better after proper sleep.  Relatedly, alcohol disrupts sleep – even a single drink can make a notable difference – so I cut out most alcohol for the duration of my training.

Taper: Slowing down training close to a race can make a big difference – I reduced volume by 30% starting two weeks before, and in the 7 days leading up to the HM did only low-intensity running plus 1km fast, at around 30% of maximum distance.  I’m sure this made a big difference for me: some people say they feel niggles more in the taper period as healing/recovery happens, and I certainly noticed (and worried about) this, but I was totally ready on the day.  For longer training blocks I’ve heard people insert “recovery weeks” periodically, although in my case I don’t think I had enough time to make this work.

Equipment

I really tried not to buy anything fancy.  I didn’t want my running to become about having fancy kit and spending money.

Watch: My running watch is a second-hand Garmin Forerunner 245.  It’s great and nothing fancier is needed, but it’s definitely worth having a watch suited to running, as I found my Fitbit (my daily watch) unreliable and inaccurate.  Conversely, the Garmin makes a bad daily watch because it has the worst sleep tracking of the Fitbit/Google and Apple devices I’ve used.  That said, I didn’t buy the watch until after over a year of just using my phone.

Shoes: My shoes are Altra Escalante Racers (good if you’re used to barefoot-style shoes; otherwise you probably want something different, as these have no heel rise) at around £115/pair.  They don’t last well, though: I start mixing in a new pair after around 500km and they’re dead at around 800km.

Foam roller: I got this when I had a bit of calf soreness and a friend said it worked miracles when they had similar issues.  I’d agree – I got an AmazonBasics one for under £20.  That said, I was also careful not to overtrain, and generally increased intensity by no more than 10% per week or between comparable sessions.

Otherwise, I have shorts from Amazon (<£20/pair), technical running underwear (I got wool-rich ones but they aren’t available anymore), and basic technical T-Shirts (I’ve been using walking ones I’ve had for many years…).

I subscribe to Strava but honestly it’s not that necessary.  Garmin Connect, the free app that you get access to with their watch, has almost everything that Strava does apart from the social aspect of it.

Conclusion

The main thing that I didn’t do that I think would’ve helped was some kind of strength training for core and legs, which I’ll probably look into in future.

I hope this gives someone a helpful start. Post how you trained for your first half in the comments, or feel free to ask questions!


Smart under-cupboard kitchen lighting

Our kitchen is almost 10 years old now. When we first had it designed and installed, I totally underestimated how important good task lighting was, but thankfully the designer knew better and included a set of under-cabinet lights to illuminate the work surfaces. The lamps in these faded and were replaced, and the replacements started failing, so now it’s time to install some nice smart lighting to make things even more functional.

The old lighting, with dark and light areas caused by spotlights (excuse the temporary backsplash behind the cooker!)

I’m a big fan of dimmable lighting that can set the mood at night and reduce eye strain, while still providing practical light for tasks when needed. I also like lighting with an adjustable white point, since lots of blue-light exposure at night seems to be correlated with lower sleep quality.

I decided LED tape and a semi-custom installation (separate controller/wiring, but not going so far as coding up my own controller via an Arduino or similar) was the way to go, as the technology has grown in popularity and maturity over the last few years. I replaced all the light fixtures with a set of three cool-white/warm-white adjustable strips from LEDspace, then used a ZigBee controller to connect them to the Philips Hue bridge I already have.

Here’s the story of the outcome and what I did.

Was it worth it?

I’d say a qualified “yes”. I like that the LED strip eliminates the bright/dark areas we had with the old GX53 fittings. I like being able to dim and adjust the lighting, but it’s also a bit of a chore: I think the real value in this kind of installation comes from automation. The smart controller I used works well with Philips Hue, though it doesn’t seem to have an “on startup” configuration and instead switches on with whatever the last setting was. The 9W/m tape I used is just about bright enough, but I have a niggling wish for a bit more brightness. In total we have 3m of 9W/m tape, so 27W, compared with 7×4.5W lights before, for a total of 31.5W.

Now that the novelty has worn off, I won’t personally be adjusting the lights manually on a regular basis, so the smart lighting needs to work correctly with the existing wall switching. The exception is if you’ve made all the lighting in the room smart, in which case you can use a scene switch to replace, rather than augment, the standard light switch (and even then, retrofitting is non-trivial). The options for manual control are voice (via Alexa/Google/Siri), an app (e.g. the Hue app), or a separate scene controller (such as the Hue remote – though this introduces annoying user-experience issues from the interaction with any existing switching).

I’d like to do two things in future: get smart control on more of the lights in the kitchen and dining area to make scenes (set via either Alexa or a remote control) more functional, and also consider integrating presence sensing to dim the lights automatically.

Getting started

The first hurdle was to figure out the wiring. Having decided I wanted a colour-temperature controlled LED strip, I knew I needed three wires to the LED strip. The existing installation consisted of a set of lights and a light on its own, switched from one place. The light on its own was fed a switched mains in a 2-core-and-earth cable from a junction box near the group of lights. The group of lights were fed the same switched 240V mains supply and were connected via a splitter with custom sockets that fed each light fitting.

Choosing an LED strip type

Choosing the type of LED strip can increase or decrease the complexity of the wiring. I was able to perform the slightly nasty hack of re-using the mains cable already fed through the walls to the second location as a 24V supply with two rails (for warm and cool white): 24V doesn’t require a double-insulated cable, which meant I could use the wire normally used for earth in a mains setup as one of the rails. The LED strip options you can choose from are:

  • Fixed white: This is the simplest option. You can buy a fixed colour temperature LED strip, which would require a simple two-wire connection. It’s probably the most flexible in terms of frequency of cut-points, manufacturers, and varieties such as splash-proof and “spot-free”/chip-on-board. Since these are widely available you can get brighter strips easily and with good colour rendering indexes (CRIs). The CRI describes how much of the colour spectrum is emitted by the light: higher values mean better reproduction of colours under the light.
  • Colour-temperature changing white: This is the option I went for.  Like the fixed-white strips these are dimmable, but they have two colours of LED (cool and warm white) that can be mixed to produce a broad range of colour temperatures.  They typically require three connections (+24V, warm white, and cool white).  Since the extra chips take up physically more space, the cut points are commonly less frequent: the strip I have can be cut at 100mm intervals, instead of the 50mm of many single-temperature strips.
  • RGB: These have LEDs that can emit different intensities of red, green, and blue to make a full range of colours. They typically require four connections, can often be less bright than similar white strips and don’t render whites as nicely as a dedicated white strip. For the kitchen task-lighting I was installing I ruled out this type of strip because I wanted to optimise for good white rendering. Also in my case I needed to re-use an existing three-core cable.
  • RGB-W and RGB-WW: These are like RGB strips but also have either one or two dedicated white LEDs to make the rendering of whites as good as regular white-only strips. They’re the most expensive and complex, requiring either five or six connections.

Powering and controlling the LED strips

The LED strip will require either 12V or 24V power, depending on the type you choose; mine required 24V. You can wire multiple LED strips in parallel, since each group of LED chips between the cutting points is itself in parallel on the strip. Wago connectors to join all the wires of the same type work well and are easy to install.

A common arrangement is a power supply converting 240V AC mains to 24V, then a controller module doing the dimming and colour-temperature control, and then each strip wired in parallel. In my case, I was able to wire all the strips to the controller directly so they appear as a single light in the Hue app. If you aren’t able to do that, for example if you have some strips that need a separate power supply, then you can always use multiple transformers and controllers, though in that case the LED strips will show up as separate lights in the Hue app. This is pretty normal for Hue though: lights can be grouped and controlled together and this model works well, so even if it might be more expensive it’s a reasonable approach if you need it.
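If you’re sizing the power supply for a setup like this, a quick back-of-the-envelope check helps. Here’s a minimal sketch using the numbers from my installation; the 80% derating is a common rule of thumb rather than a figure from any particular datasheet:

# Rough power-budget check for an LED tape installation.
tape_power_per_m = 9    # W/m, from the strip's specification
tape_length_m = 3.0     # total metres of tape
load_w = tape_power_per_m * tape_length_m    # 27W of LED load

# Rule of thumb: load a power supply to at most ~80% of its rating
# to leave headroom and extend its life.
min_psu_rating_w = load_w / 0.8              # 33.75W
print(f"LED load: {load_w}W; choose a PSU rated at least {min_psu_rating_w:.0f}W")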

Here’s the wiring diagram I posted on top of the cabinets next to the wiring for my installation. Nothing’s perfect, so I left the warts unhidden here: I ran out of black/red/white three-core cable so had to switch colours partway through, and I also re-used a mains cable as a 24V supply since I couldn’t re-pull more obvious cables through and felt that buying a separate transformer and controller just to avoid this wasn’t worth it.

Wiring diagram

Physical installation

In my case, the physical installation wasn’t particularly tricky. I turned off the old downlighting, checked it wasn’t live, and removed it. I was able to do this in stages — having a young child limits how much time you get to work at once on these things! — by temporarily powering the transformer for the partially-installed new lights from the splitter that was part of the original installation, alongside the remaining old light fittings. The old fittings were screwed onto the bottom of the cabinets, and I was able to reuse the screw holes for the clips that hold up the aluminium profile that houses the new strips.

Wiring

There are friction-fit connectors you can use to wire up the strips, or you can do as I did and simply solder the wires onto the pads.  I’m not new to soldering, but I did have to remember some tips from years ago when I last did it:

  • Heat up the pad and feed the solder onto it.
  • Use enough solder, but not so much that it bridges to another connector (I was using a particularly fine reel of solder intended for surface-mount work, so was prone to using too little).
  • Twist the strands of wire tightly together by grabbing the very end and rotating.
  • Add solder to both the pad and the wire before re-heating the pad and joining the two to tack the wire into place.
  • Finally, check your connections using your multimeter’s audible continuity-testing feature.

A join of two LED strips at 90 degrees
Corner fitting

I had a corner to deal with, and decided to use a corner connector to get a very tight, clean connection rather than attempting to solder wires at 90°. This worked well, although the connector did not fit into the aluminium profile I had, so I had to first measure and cut the tape, connect it at right angles, solder the wire connection onto one side, and then finally remove the backing tape and stick it onto the aluminium profiles held at a right angle. I assembled the two pieces before installing them into the retention clips. Determining the lengths of these strips and getting the profiles cut to the right length seemed daunting initially, but was pretty easy once I realised that I had around 5cm of wiggle-room on length, and that the position (depth into the cabinet) was already determined, since I was using the existing screw holes cut for the previous fixtures.

Housing

I used the adhesive tape on the LED strip to fix it into the aluminium profile. The profile has a diffuse plastic cover that can be applied to reduce the appearance of spotting; it also helps to soften some harsh shadows from the edges of the LEDs. I was originally concerned that fitting the strip into the profile might reduce the angle of light emitted, but it turned out not to be a problem. The profile and cover can be cut easily using a hacksaw.

Don’t forget, like I did, that the aluminium profile is conductive, so make sure you tape over your contacts before test fitting the strips. Luckily, the controller had overcurrent protection so shorting it didn’t cause any permanent damage.

ZigBee controller

Gledopto LED dimmer/ZigBee controller
LED controller

The ZigBee controller I used is the Gledopto WW/CW controller. It integrates well with Philips Hue and works pretty flawlessly in the app. As mentioned above, it doesn’t support an ‘on startup’ setting, but other than that it’s no different from the other Hue lights I own.

Parts and tools

Here’s a list of parts I used:

  • Cool-white/warm-white adjustable 24V LED strips (9W/m) from LEDspace
  • Gledopto WW/CW ZigBee LED controller
  • 24V LED power supply
  • Aluminium profile with diffuser cover and retention clips
  • Three-core cable and Wago connectors
  • A corner connector for the LED strip

Tools you will probably need include:

  • Soldering iron and solder
  • Hacksaw (for cutting the profile and its cover)
  • Multimeter with audible continuity testing

Conclusion

Rubbish bag with old lighting parts in it.
Got rid of quite a lot of stuff when taking down the old lights

Overall, this was a fairly simple, fun project and an improvement over the previous lighting setup in the kitchen. The main thing I’d do differently is choose a brighter LED strip, but other than that it’s working nicely.

Thanks for reading! I hope this was helpful if you’re considering a similar project. If you have questions or want to learn about my future projects, follow me on Twitter for more.


Everything you need to know about integrating Google Sign-in with your Web App

Investigating Google Sign-In led me down a rabbit hole of trying to understand authentication with Google, how it fits with standards such as OpenID Connect (OIDC) and OAuth 2.0, and how best to integrate it into a web app that has a JavaScript frontend and backend API.  This article is the result of that research.

On the face of it, integrating with Google Sign-In is trivial: there’s a Google page that tells you how to do it.  But I believe in understanding the tools you’re using rather than just pasting code from random websites, especially when it comes to security-related functions.  I wanted to understand what was happening under the hood and be sure that I wasn’t inadvertently introducing security holes into my app.  So I created a simple sample app that has just enough function to demonstrate how to safely integrate sign-in to your app.

Check out the sample Hello app on GitHub that demonstrates the topics discussed in this article.

The app

The demo Hello app is a React-based web app that talks to an API assumed to be in the same domain.  The app that led me to start this work – the BoilerIO open-source heating controller – is very similar: it is close to a single-page app (I’d say it is one, but the definition of an SPA is a bit vague, so I’m going to avoid absolutes here) with a JavaScript (React) frontend and an API it interacts with (intended to be hosted in the same domain) written in Python using Flask. The goal is to add login to the app using Google Sign-In.

In the BoilerIO app I wanted to restrict login to a set of known people, authorised by an administrator, but this probably isn’t the common case so I’ve not included that in the sample Hello app.

When to use this approach

Authentication and authorisation can get complex when your application needs to deal with multiple identity providers, and even more so when you’re dealing with non-cloud identities.  The scenario we’re dealing with is one of the simplest: a single, cloud-based identity provider aimed at consumers (“social sign-in”).  If you have a more complicated scenario — your own username/passwords, multiple providers (Google Sign-in, Login with Amazon, Facebook, etc.), or enterprise directory services (e.g. Active Directory) then it gets a whole lot more complicated and you might instead use a service like Amazon Cognito or Auth0 to do the hard work.

This doc is about Sign-In specifically: you might also want to get authorization to use resources that the user owns in the third-party domain, for example getting access to their Google Drive or Calendar for use within your application.  There are additional considerations in this case around security of the tokens and choosing the right flow for your application that aren’t covered here.

We’re using the Google SDK to do sign-in, with a lightweight wrapper in the form of the react-google-login frontend library.  This is a good choice if you want just Google Sign-In.  If you want to support multiple providers then you could still go this route (other service providers such as Facebook have their own SDKs too), or you could use a service that gives you multiple options via a single interface (such as Cognito mentioned above), or use a generic OpenID Connect implementation for the providers that support it.  Using the SDKs removes some complexity and therefore (hopefully) security risk from your implementation.

How does Google Sign-In work?

Google Sign-In via the SDK is built on Google’s OpenID Connect (OIDC) implementation.  It uses extensions to this, the so-called “Identity Provider (IdP) IFrame”, so some of the recommendations for securely implementing OIDC don’t apply directly.  The “IdP IFrame” mechanism was described in a draft RFC published to the OIDC mailing list, but I wasn’t able to find any follow-up to this, and it’s likely that the implementation has progressed since that draft was published.

The SDK provides a signIn method that you can call as well as a way to generate a button the user can click on that is equivalent to calling this method.  This initiates a user flow that, by default, pops up a window to allow the user to authenticate to Google.  This will ask them to give permission to your application for the “scopes” you have requested if they haven’t already provided this permission before.  Scopes are the things that you are requesting access to; in the Google console you can select which scopes your credentials are allowed to be used for, and you should pick the smallest set possible.  For a sign-in use case, you only need to ask for “openid”, “profile”, and “email”.  You could also get access to the user’s Drive contents, Calendar, or other resources, by adding appropriate scopes here.

Once the user completes the sign-in process your application will receive back an access token, a refresh token (if the “offline” type was selected), and an ID token.  The ID token is a JSON Web Token (JWT) that contains claims about the user’s identity.  Your API must validate this (since a malicious user could easily inject a fake JWT) and then it can be used as proof of the user’s identity.  Within your app, you can treat this a bit like a correct username and password.

The other tokens aren’t as relevant for sign-in/authentication use cases.  You can use the access token to access Google resources belonging to the user in the scopes that you requested.  If you only specified the limited scopes suggested above, this will give you access to the userinfo endpoint, which can be used to obtain information about the user the access token was generated for.  The information this provides is a subset of that in the ID token you already received.  All of these tokens have an expiry, but the refresh token typically has a longer expiry than the others and can be used to get new tokens.

The client-side Google code for the SDK is closed-source (or, at least, I couldn’t find the source), so it’s hard to say exactly what it’s doing.  My guess is that it’s using a grant similar to the implicit OAuth grant, but with some additional security brought by the use of its own code within the IFrame and the way it gets the credentials to the calling application (using HTML5 local storage as a relay between the Google-originated IFrame/authentication popup and the client application, rather than via a redirect as would be used in plain OAuth 2.0).

Adding Google Sign-In to your application

When adding authentication to your application, you’ll need to:

  1. Create OAuth credentials in the Google API console (since this is used in the Sign-In process).
  2. Add flask-login to your app to manage user sessions.
  3. Add authorization checks to your existing service endpoints.
  4. Implement some way of storing users and representing them, and then link this with flask-login.  Typically this would be in a database.
  5. Implement endpoints to log users in and out and to provide information about the user to your frontend/client.
  6. Modify your frontend/client(s) to check for 401 (Unauthorized) responses from your server, and offer the option to log-in when these are received.
  7. Add a login page to your frontend application.

We’ll go through each of these in turn, and each section will cover in more detail what you need to do along with pointers to the sample Hello application.

Creating OAuth Credentials for your service

What do I need to do?

  1. Go to the Google API console and create OAuth credentials for your app.  You can choose the “Web app” option when creating the credentials.  You’ll also need to configure your “Consent screen” if it’s the first time you’ve done this.
  2. You only need the Client ID when using the Google Sign-In SDK: expose this through configuration to your frontend and backend.

In the Hello app: The Client ID is exposed through the GOOGLE_CLIENT_ID configuration entry in the Flask backend.  We use Flask’s standard configuration support to load configuration from a Python file named in the environment (HELLO_CONFIG).  Although it’s not a “secret”, the client ID is still better handled through configuration than checked into the code.  In the React application, we use an environment variable (REACT_APP_GOOGLE_CLIENT_ID) in the .env file.  This is built into the distribution at build-time and visible to clients.
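As an illustrative sketch, loading the client ID into Flask configuration might look like the following.  The structure mirrors what’s described above, but treat the details as my assumptions rather than a copy of the Hello app’s code:

# config.py – the file named by the HELLO_CONFIG environment variable
GOOGLE_CLIENT_ID = 'your-client-id.apps.googleusercontent.com'  # placeholder value

# app.py
from flask import Flask

app = Flask(__name__)
# Load configuration from the Python file named in HELLO_CONFIG.
app.config.from_envvar('HELLO_CONFIG')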

The Google Sign-In SDK uses OAuth under the hood, and you need to pass it an OAuth Client ID for your application.  You can create this in the Google APIs console.  Some of the choices available when doing so are:

  • Application type: Google limits what can be done from certain client IDs, for example which OAuth 2 flows (implicit grant, authorization code) can be used.  You should select “Web client” here.  It’d be nice to see Google publish more details about what these choices allow/disallow; for example, for a website with a backend it might provide more security if you could require that the auth-code flow is used.
  • Authorized JavaScript Origins: These restrict where Google’s authentication flow can be initiated from.  You should put any origins that you host your site from, including localhost for testing, here.  You’ll need to provide the full origin including protocol, hostname, and port.
  • Authorized redirect URLs: Leave this blank.  In a normal OAuth flow, the identity provider uses a redirect back to your site to get credentials to you (the access/refresh token).  When you’re using the Google Sign-In SDK, this happens outside your application using the storagerelay URI scheme, so there is no redirect back to your site.  As a side note: if you step outside the sign-in flow discussed here and use OIDC directly with the Auth Code grant type, and you’re getting your tokens from the token endpoint rather than from a redirect, you need to specify the same redirect_uri parameter in both calls (for the code and the token), and you might want to set it to postmessage rather than an actual URI.

As an aside, localhost is a valid option for the URLs above if you’re using a standard OIDC/OAuth flow rather than the Sign-In SDK because the redirect is handled by the user’s local browser and so localhost refers to the user’s host in that context.  You will want to include that when testing your service locally.

If you make changes to your OAuth credentials (such as adding a new authorized JavaScript origin) it can take several minutes (sometimes over an hour) for this to propagate to Google’s servers, so be mindful of this when testing updates to settings.

Adding authorization to your service

What do I need to do?

  1. Add flask-login as a dependency to your application (using pip install or pipenv if you are using this for package management).
  2. Add the login_required decorator to all methods that should require the user to be logged in.
  3. Create a secret key for your app to use when signing sessions.

In the Hello app: The flask-login dependency is in the Pipfile.  In app.py we instantiate a LoginManager at the start.  The secret key is part of the configuration.

For a single-page app on the same domain, the flask-login extension to Flask helps you to implement session handling in your backend.  Flask-login lets you implement your own logic to authenticate a user, but does the work of managing the session and cookies for you.

You will provide a “login” endpoint that validates the credentials provided and sets a session cookie (using the login_user method provided by flask-login).  This cookie is then passed on each subsequent request, validated by the framework for you, and decoded to determine the currently logged-in user, whose details are available to your application code via the flask_login.current_user variable.  You can also decorate your endpoints with the @login_required decorator to ensure that only logged-in users can access them.

You can treat a valid ID token from the identity provider as equivalent to a username and password: your “login” endpoint takes this ID token, validates it, and logs the user in if it is valid.

Flask sessions include a message authentication code so are tamper-proof, but they are not encrypted (see the documentation).  They’re also stateless, so there shouldn’t be any concern around horizontal scaling of your service.  Flask by default (as of writing) uses the HMAC-SHA1 algorithm, so a key length of at least 160 bits (20 bytes) is desirable.  As noted in the Flask documentation, you should generate this using os.urandom.  To improve security, we set “session protection” to “strong” in the login manager, which prevents the session cookie from being used on a different host, and set the “Remember me” cookie to have the HttpOnly flag so it can’t be read by client-side JavaScript code.  You could also add the “secure” flag to these two cookies so that they are only sent over HTTPS (and not HTTP) – you could enable this in your production configuration but not in the testing configuration, to make development easier.  Setting HttpOnly prevents code on the client accessing (and therefore potentially leaking) the session token.
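Putting this together (continuing the configuration sketch above, and using standard Flask/flask-login names rather than the Hello app’s exact code), a minimal setup might look like:

import os
from flask_login import LoginManager, login_required

# Generate a 160-bit key with os.urandom(20), ideally once, stored and then
# loaded from configuration; regenerating on every startup invalidates sessions.
app.secret_key = os.urandom(20)
# Mark the "Remember me" cookie HttpOnly so client JavaScript can't read it.
app.config['REMEMBER_COOKIE_HTTPONLY'] = True

login_manager = LoginManager(app)
# "strong" session protection rejects sessions whose identifiers (such as
# the client's IP address/user agent) change, guarding against stolen cookies.
login_manager.session_protection = 'strong'

@app.route('/greeting')
@login_required    # only logged-in users may call this endpoint
def greeting():
    return {'message': 'hello'}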

Why not use JSON Web Tokens (JWTs)/bearer tokens, passed via the Authorization header on each request?  Doing so is actually pretty similar to the sessions approach described above, except that a JWT is used and passed in a different HTTP header: both are tamper-proof, unencrypted, encoded blobs of data.  The origin of the JWT could be your application (generated in your login method, then passed by the client on subsequent API calls), or it could be the token given to your application by the identity provider.  I don’t recommend this approach because, when handling your own JWTs, there’s no advantage over using Flask’s built-in session support, and it has the disadvantage of requiring additional code or dependencies in your application.  Because JWTs can be large and need to be passed in the Authorization header, you can’t store them in an HttpOnly cookie, and your client code has to handle them.  If you use the identity provider’s (Google’s) ID token (which is a JWT), the main issue is that it is short-lived, so you’d have to call the Google Sign-In SDK again to get a new token.

Managing users

What do I need to do?

  1. Implement a way to store and retrieve user information, a “user loader” (a method that looks up a user by ID, called by the flask-login code), and a “User” class with at least an ID attribute.
  2. You’ll also need a way to log users in: see the next section for more information.

In the Hello app: The UserManager class in the Hello API implementation is a trivial in-memory store of users.  It’s keyed by Google subscriber ID; in your application it is probably better to use a synthetic primary key as the user ID and use this to identify users.  This gives you flexibility later to add other authentication providers.

The load_user method is decorated with the LoginManager instance’s user_loader and does a lookup of the user ID passed to it.  This will be called when a valid session cookie is received containing the logged-in user ID to get the full User object back.

The User class uses the UserMixin to provide basic requirements; note that you must implement an id property.
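Here’s a minimal sketch of those pieces, continuing the setup sketch above (again, illustrative rather than the Hello app’s exact code):

from flask_login import UserMixin

class User(UserMixin):
    """Minimal user object; UserMixin supplies is_authenticated and
    friends, and get_id() returns the id attribute below."""
    def __init__(self, user_id, name, email):
        self.id = user_id    # here, the Google subscriber ID ('sub' claim)
        self.name = name
        self.email = email

# Trivial in-memory store keyed by user ID; a real app would use a database.
USERS = {}

@login_manager.user_loader
def load_user(user_id):
    # Called by flask-login with the ID from a valid session cookie;
    # returns the full User object, or None if the user is unknown.
    return USERS.get(user_id)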

Typically you’ll have one or more database tables to represent users.  It’s worth considering that you might want to allow the same user to sign in with different providers or credentials in future (username/password, Login with Facebook, Login with Amazon, Google Sign-In, etc.).  Each provider will give you a “subscriber ID” (the sub field in the identity token) as part of the user’s identity that is guaranteed to be unique for that identity provider (see the OIDC standard, which says that the subject identifier is a “Locally unique and never reassigned identifier within the Issuer for the End-User, which is intended to be consumed by the Client.”).  So, a simple solution could be a column per ID provider.  The article Merging multiple user accounts by Patrycja Dybka suggests a more sophisticated alternative, based on what the StackOverflow site does, using a separate table to store the user identities, as well as discussing how this could be presented to the user.

You should consider how to deal with users logging in using the Google Sign-In feature that have not used your site before.  The Hello app example doesn’t deal with this, but you could return a value to your client to indicate that additional user information is required in this case if needed.

Implement “login”/”logout” endpoints for your backend

What do I need to do?

  1. Extend your API to facilitate login/out.  You could consider using the google_token module provided in the Hello app to do your token validation.

In the Hello app: We expose a single endpoint (/me) which has a POST method to log in using an ID token, a GET method to retrieve information about the currently logged-in user, and a DELETE method to log the user out.

To validate the ID token, we provide a convenience method that calls the Google Python SDK: it’s a trivial wrapper that creates a CachedSession object the SDK uses to make outgoing HTTP requests.  By using the cachecontrol library here, we honour HTTP caching headers (so that, for example, Google’s key rotation won’t cause our logins to fail, but we also don’t cause outbound traffic to scale with calls to our login function).

The login endpoint validates the user credentials (the ID token provided by the Google Sign-In process) and either logs the user in (sets session cookie by calling the LoginManager’s login_user method) or returns an error (403 Forbidden).

To validate the signed ID token, a number of checks must be made.  This is all done for you by the Google SDK.  The Hello app has a convenience method you could copy, in the google_token module, that deals with caching outgoing HTTP requests.  In summary, the checks are: (i) validate the signature using Google’s public keys (which need to be fetched from the correct Google location), (ii) check that the token has not expired, and (iii) check that the token was intended for your application (so that a token intended for another application cannot be injected).  You can find out more about these checks in the Google documentation.
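For illustration, a login endpoint along these lines might look like the following, continuing the sketches above.  The token-validation calls are the standard google-auth/cachecontrol pattern from Google’s documentation; the endpoint shape and names are my assumptions, not the Hello app’s exact code:

import cachecontrol
import requests
from flask import abort, request
from flask_login import login_user
from google.auth.transport import requests as google_requests
from google.oauth2 import id_token

# Cache outgoing HTTP requests so Google's signing certificates aren't
# re-fetched on every login (they're served with cache headers).
cached_session = cachecontrol.CacheControl(requests.session())
token_request = google_requests.Request(session=cached_session)

@app.route('/me', methods=['POST'])
def login():
    try:
        # Verifies the signature, the expiry, and that the token was
        # issued for our client ID (the audience).
        claims = id_token.verify_oauth2_token(
            request.json['id_token'], token_request,
            app.config['GOOGLE_CLIENT_ID'])
    except (KeyError, TypeError, ValueError):
        abort(403)    # missing or invalid token
    user = USERS.setdefault(
        claims['sub'],
        User(claims['sub'], claims.get('name'), claims.get('email')))
    login_user(user)    # sets the session cookie via flask-login
    return {'name': user.name, 'email': user.email}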

Adding a login page to your client

What do I need to do?

  1. Import the Google Sign-In SDK.  In our React app, we’re using the react-google-login npm package.
  2. Implement a login page.
  3. Redirect to the login page when you get Forbidden responses to API calls on the client.

In the Hello app: We use the react-google-login package.  The main page uses a HashRouter to implement navigation between pages of the app.  There are two pages: the main “Hello” page and the “Login” page.  We implemented the ProtectedRoute component to encapsulate logic to redirect to the login page if the user needs to authenticate.  In the main app we also use an AppBar with a profile icon that the user can click to log out, to demonstrate using information from the user’s profile.

Once you’ve implemented authentication on your service, modifying the client to authenticate should be relatively easy: in React we do this by keeping some state that indicates whether the user needs to be authenticated.  When an API request fails with an error indicating lack of authorization, this state is updated to indicate that authentication is required.  When authentication is required, we render any “protected” pages as redirects back to the login page.

Similarly, if the user is authenticated already we redirect to the homepage.  Some clients would take a slightly more sophisticated approach by keeping track of which page to return to after authentication is complete.  If you’re doing this be careful not to introduce open redirect bugs (see the OWASP cheat sheet on Unvalidated Redirects and Forwards for more info).

The login page itself includes the Google Sign-In button, and a handler to send the ID token we get handed to our backend for validation and to sign the user in.

Add cross-site request forgery (CSRF) protection

What do I need to do?

  1. Any mutating operations need to be protected with a CSRF check: a simple option is to use the X-Requested-With header.

In the Hello app: The Hello frontend app adds X-Requested-With as a default header sent by Axios.  In the backend, the csrf_protection function in app.py checks for the header and returns HTTP 403 (Forbidden) if it does not exist, and all mutating operations are decorated with this check.

Unless the target host explicitly allows it, browsers will block cross-origin requests as part of their cross-origin resource sharing (CORS) checks.  However, in cases such as POSTs, the call to the server is actually still made; it’s only the result of the call that is withheld from the client.  This means that an attacker hosting a random site could use your authenticated session (via the session cookie) to call your API and make side-effecting requests, despite the cross-origin request protection provided by modern browsers.  This behaviour, along with the suggested mitigation, is described in the excellent article CSRF Mitigation for AJAX Requests.

Adding the X-Requested-With header means the browser cannot make the cross-origin request without doing a pre-flight check first (since this header is not one of the few allowed on “simple” cross-origin requests).  The pre-flight response will indicate that the call isn’t allowed, and the browser will block it.  Adding the header is easy with axios:

axios.defaults.headers.common['X-Requested-With'] = 'XmlHttpRequest'

And to check for it in Python you can decorate your request handler with this:

import functools
from http import HTTPStatus

from flask import request

def csrf_protection(fn):
    """Require that the X-Requested-With header is present."""
    @functools.wraps(fn)  # keep fn's name so Flask endpoint names stay unique
    def protected(*args, **kwargs):
        if 'X-Requested-With' in request.headers:
            return fn(*args, **kwargs)
        return "X-Requested-With header missing", HTTPStatus.FORBIDDEN
    return protected

How secure is this?

It depends on whether social sign-in meets your application’s requirements.  If implemented correctly, this flow can authenticate a user and give you similar confidence that they are who they say they are as the user entering a password would.  But you don’t control the sign-in flow, so you’re relying on the policies of the identity provider (such as when password re-entry is required).

Using a scheme like this instead of a username/password system means you don’t have the risks associated with password management (such as making sure to use an appropriate password-hashing scheme).  Even so, you’re almost certainly storing sensitive information, so you need to handle it accordingly.

Social sign-ins such as Google implement features such as two-factor authentication, which can increase the security of the sign-in.

When using social sign-in, the “fresh login” feature of flask-login looks somewhat questionable: often, after initial consent to allow login to your app, even logging out and then back in won’t require the user to enter their Google password as they are still logged in with Google.

Ultimately, when using social sign-in you’re not really in control of session policies since a new login can often be started without a password being entered.  For sites where this is unacceptable (you probably wouldn’t want your bank doing this, for instance) you’ll have to use an alternative.  For many sites it is good enough though: you’re protecting your application on the same basis as the user’s GMail account, which is often a high value asset.

Some (partial) gory details

OAuth 2 and OpenID Connect

There are various versions of the OAuth and OpenID standards, but the relevant ones are OAuth 2.0 and OpenID Connect.

OAuth 2.0 is a standard to enable authorization of users on third-party sites (or the sites themselves) against resources on a different service: e.g. my heating app accessing a user’s Google Calendar on behalf of that user.  On its own it doesn’t provide for authentication; it simply provides a process where a user authenticates to a third-party identity provider and allows an application access to the resources belonging to them on that site.

OpenID Connect builds on top of the OAuth 2.0 flow by specifying details of how particular flows work and what certain values that are returned as part of the flow should look like, in particular that an ID token is provided that can be validated as authentic.  This does allow for authentication of a user to your application by having them sign into a third-party identity provider such as their Google account.

In writing this article and the implementation, I found a lot of confusion about how these standard protocols relate to the Google Sign-In SDK.  I had assumed the API Google provided was a simple helper to use OpenID Connect in your application (which isn’t exactly true), and therefore that recommendations for the latter were relevant.  However, the SDK actually implements a customised authentication flow that extends OIDC and relieves the developer of some aspects of the security implementation (such as ensuring that there is a unique and tracked state parameter in their authorization requests).

The IdP IFrame and storagerelay

Google’s Sign-In JavaScript SDK uses an iframe that is added to your site where Google pages are loaded to help with authentication.  There was a draft standard published for this method that I found when trying to understand why a redirect URI starting “storagerelay:” was being used.  This was published by Google engineers to an OpenID list here: http://lists.openid.net/pipermail/openid-specs-ab/Week-of-Mon-20151116/005865.html.  The iframe uses HTML5 local storage and storage events to communicate status back to your application, allowing it to use Google-controlled pages to complete the authentication without redirecting back to your site in the traditional way.

It’d be great to have more details of the Google SDK’s implementation to include in this section, to better understand how the security concerns compare with using one of the standard OAuth grant types, what to do if you only need identity, and how to safely avoid tokens reaching the client.  The SDK does support the authorization code flow (which allows for this), but as far as I can see you can’t prevent the non-auth-code flow from being used.  Even though the main risk regarding token leakage through browser history is addressed by the IdP IFrame implementation, this isn’t well documented as far as I could find, and it would seem safer to be able to prevent clients receiving tokens directly through configuration.  This is especially true if you’re using scopes beyond just identity.  The OAuth2 website describes the different grant types available: https://oauth.net/2/.

Conclusion

This article has covered a holistic view of integrating Google Sign-In into your website, and hopefully gives you the confidence to do so safely.

I’m keen to make sure the content here is accurate, and that the Hello app example is secure and good quality, so please do provide comments or share pull requests if you see any areas for improvement.

Links

Check out the Hello demo app on GitHub that implements everything described in this article.

Google Sign-In related, and packages used in this project:

OAuth and OpenID Connect related documentation:

And finally:

  • Not the Google API JavaScript Client on GitHub.  This repository has a promising-sounding name, but is just documentation with a placeholder main.js file, sadly.

Heating season 2019 is here!

Having just got back from a holiday in Switzerland (which was awesome), we came home to a slightly chilly house and turned the heating back on.  Although I’ve not posted here for a while, work has continued over on the GitHub repository for Boiler.io, and there are a few new features that I’ve added that are worth mentioning.  I’ve also replaced the crusty jQuery-based UI with a nice React application.  It’d be great to hear from you if you’re interested in running open-source heating software.

Why am I doing this?  Firstly, it’s a great learning exercise: my software engineering background has largely been in the world of operating systems, completely different from this kind of web/IoT application.  I hope it will be more than a toy, though: as we become more and more conscious of the environment, I think it’s increasingly important for technologies that help save energy to be open and accessible to everyone.  Everything from enabling people to turn off their heating when not at home, through to algorithm improvements that make boilerio more efficient, will help all its users, as well as providing a reference for others implementing similar systems.

Links to the software

New features

Multiple zones

As we have two floors in our house, I wanted to control them independently.  In the first year of running boilerio, I only controlled the ground-floor heating and used our existing system for the first floor.  Since implementing support for two zones, I’ve controlled both floors of our house with no major issues for the whole of last winter.

Many of the changes were fairly mechanistic; one key decision was how to change the schedule representation.  The minimum to get this working would be to have two instances of the controller running with separate databases, etc.  A step up could have been to have two clients running against a single server with the schedules represented separately; this is probably a classic microservices design.  One disadvantage would be that it’s harder to co-ordinate between the clients for features like timing the on/off cycles of the multiple zones to be out of phase to reduce load on the boiler.

In the end I added zone IDs to entries in the schedule table, referring back to a new “zones” table where each zone is given a name and the temperature sensor ID that it uses.  This implies a 1:1 relationship between control zones and sensors, which could easily be improved upon in future.  The existing /schedule endpoint is now basically unused (though it was updated to return a mapping of zones to their schedules) and instead there’s a per-zone zone/<id>/schedule endpoint.
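To illustrate the shape of that schema, here’s a sketch (illustrative only, not the actual boilerio table definitions):

import sqlite3

conn = sqlite3.connect(':memory:')
conn.executescript("""
    -- Each zone has a name and the sensor that reports its temperature.
    CREATE TABLE zones (
        zone_id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        sensor_id TEXT NOT NULL     -- implies the 1:1 zone/sensor mapping
    );
    -- Schedule entries refer back to the zone they control.
    CREATE TABLE schedule (
        zone_id INTEGER NOT NULL REFERENCES zones(zone_id),
        day INTEGER NOT NULL,       -- 0 = Monday ... 6 = Sunday
        starttime TEXT NOT NULL,    -- e.g. '06:30'
        temperature REAL NOT NULL   -- target in degrees Celsius
    );
""")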

Time to temperature

I wanted a feature similar to Nest’s, whereby the heating turns on before the requested time in order to reach the desired temperature by the requested time.  The starting point for this has been to add predictions, shown in the UI, of how long it will take to get to the current target.

The current implementation was good to learn from but probably needs to be changed.  There is a separate program, monitor, that looks at what the boiler and sensors are doing and at the weather report (to determine the difference between the inside and outside temperatures) and tries to determine the rate of heating.  It watches for periods where: (i) the heating has been on for at least ten minutes; (ii) after the first ten-minute period, which is ignored, it’s been on for a further continuous ten minutes and the temperature rose during that time.  We then take the duration of the interval and the starting and ending temperatures to determine the rate of heating, given the difference between the inside and outside temperature at the start of the period.  This is published to the server.  The REST API then exposes a gradients API on the zone, where the client can retrieve an aggregated view of the heating gradient for a given temperature delta (currently this gives the mean gradient for readings rounded to the nearest half degree Celsius).  The client can then use this information to predict a time given the current and target temperatures.

Ground-floor heating gradients

Was it worth it?  The graph above shows a visual representation after running the system with measurements recorded over one winter.  The line is the temperature gradient at a particular temperature difference, and the bars show how many readings went into each aggregate value (so you can see we can probably safely ignore the outermost data points as not having enough input data).  I think it’s not totally clear: there definitely does seem to be a downward trend in the heating rate as the temperature difference increases, but there are also some upward spikes, including at the highest temperature differences.  However, there are several confounding factors:
  1. The boiler temperature could have been changed as the outside temperature changed, which would affect the rate of heating but isn’t observable by this method.
  2. The downstairs temperature sensor is in the kitchen, so can be affected by cooking being done at the same time as the heating is on.  Again, this isn’t currently observable by the system, although real-time energy-use data for the house is available so could potentially help here.
  3. Similarly, doors being open or closed can make a big difference, as can room occupancy.  These are awkward to measure and account for.
For comparison, here’s the same graph for the first floor, which isn’t affected by the “cooker effect”:

First-floor heating gradients

Although I knew this going into implementing the gradients API, a better (if more time-consuming to implement) method would be to actually record the time-series data for temperature and boiler on/off over time.  That way, it would be possible to try out different algorithms on historical data, and the logic deciding what data to record would move off the client.  Luckily this would also make for quite a neat UI feature, so it’s appealing as a next step for development.
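As a sketch of how a client might turn the aggregated gradients into a time-to-temperature prediction (the function and field names here are illustrative, not boilerio’s actual API):

def predict_minutes_to_target(current, target, outside, gradients):
    """Estimate minutes until `target` is reached.

    `gradients` maps an inside/outside temperature delta, rounded to the
    nearest half degree Celsius, to a mean heating rate in degrees C per
    minute, as returned by the zone's gradients API."""
    delta = round((current - outside) * 2) / 2    # nearest half degree
    rate = gradients.get(delta)
    if not rate or rate <= 0:
        return None    # no usable data for this temperature difference
    return (target - current) / rate

# Example: 2.5 degrees to climb at ~0.05 degrees/min is about 50 minutes.
gradients = {10.0: 0.05, 10.5: 0.045}
print(predict_minutes_to_target(18.0, 20.5, 8.0, gradients))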

Cleaning up the API

As the REST API was starting to get a bit bigger and was basically undocumented, I was finding it difficult to keep in my head, and thought it was time to document it.  In doing so, I’ve started to move it over to flask-restplus, one advantage of which is that you get Swagger documentation and a UI automatically.

The Swagger UI generated by flask-restplus

One annoying thing about this is that it doesn’t work well when hosted behind a proxy server at a non-root path.  There are a few threads about this issue online, but no pleasant solution that I’ve found: I don’t want to have to modify the application to know its actual root, and ideally I don’t want to modify the proxy configuration to know that the app uses Swagger (though maybe that’s the better option).  I think there’s still room for someone to figure out a good solution here.

There’s still work to do: I moved some of the zones APIs to flask-restplus and tried to clean them up along the way, but there are still improvements that could be made.  For example, the schedule API could be moved under zones, since it lists out each zone separately anyway; it could probably also do with some thought applied as to how to make it more RESTful and single-purpose (seeing target_override in there, for example, is somewhat cringe-worthy).

Conclusion

There are still a lot of exciting avenues to explore: keeping a temperature history that can also be used for machine learning for temperature prediction, modifying the scheduler to start the heating early according to the prediction, integrating with other home automation tools, and a lot more.  However, having used Boiler.io for two winters now, I’m really pleased with the stability and usability as they currently stand.  It’d also be great to enable other physical devices to work with it (e.g. some of the Z-Wave controllers), so if you have one of these and are interested in trying it, please let me know.