Firebase Hosting With React & Vite in 2023

Creating a starter React app and hosting it on Firebase Hosting is very simple. First you create the app using Vite’s React template. Then you initialize Firebase Hosting in the project. Finally, you run it locally to develop, and deploy it to Firebase when it’s ready for the world to see!

Create a React app named my-great-app:

$ npm create vite@latest my-great-app -- --template react
$ cd my-great-app
$ npm install
$ npm run build

Initialize Firebase Hosting:

$ firebase init hosting

  • select account
  • select “Use an existing project” (or create a new one if you prefer)
  • select your project
  • public directory should be dist
  • answer y to “Configure as a single-page app (rewrite all urls to /index.html)?”
  • answer n to “Set up automatic builds and deploys with GitHub?”
  • answer n to the question about overwriting dist/index.html

Edit firebase.json and add this line:

"predeploy": "npm run build",

right after this line:

"public": "dist",

and save the file.
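
After the edit, the hosting section of firebase.json should look roughly like this (your ignore and rewrites entries will be whatever firebase init generated):

{
  "hosting": {
    "public": "dist",
    "predeploy": "npm run build",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"],
    "rewrites": [
      { "source": "**", "destination": "/index.html" }
    ]
  }
}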

Finally, run your app! To develop locally:

$ npm run dev

and when you’re ready to deploy to Firebase Hosting:

$ firebase deploy

Midjourney Paints Members of Rush

I’ve been playing around with Midjourney, the AI image-generation tool available via a Discord server. While trying out a few ideas, one of my family members suggested trying “Geddy Lee as a bird,” which I proceeded to do. It came out interesting, so I thought I’d round out the Canadian rock trio with some oddball imagery. Here they are…

Geddy Lee as a bird by Midjourney
Alex Lifeson as a Lowes employee by Midjourney
Neil Peart as a starship captain by Midjourney

Creating a Basic Google Cloud Compute Redis Instance

I use Redis for many different purposes. Sometimes I need a large amount of RAM so Redis can act as a key store for megabytes or gigabytes of key values. Sometimes I just need distributed locking and small-scale caching. Either way, setting up a Redis instance is a task I find myself performing from time to time.

Regardless of the size of the compute instance, I tend to perform the same steps: Create an Instance, select the machine type, and then under Advanced options > Automation I use a script like this:

# refresh package lists
sudo apt-get update
# remove man-db (and anything now unused) to speed up later package installs
sudo apt-get remove man-db -y
sudo apt-get autoremove -y
# install Redis
sudo apt-get install -y redis-server
# comment out the "bind" line so Redis listens on all interfaces
sudo sed -r -i 's/^bind\s+(.+)$/# bind \1/' /etc/redis/redis.conf
# turn off protected mode so remote clients can connect
sudo sed -r -i 's/^protected-mode yes$/protected-mode no/' /etc/redis/redis.conf

The basic idea is this clears out man-db and any unused packages, installs Redis, and then modifies /etc/redis/redis.conf for my purposes. And, yes, I tend to remove man-db on my GCP instances; it really speeds up any package updating later on, especially on the smaller machine types.
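
Once the instance is up, a quick sanity check from another machine (assuming your firewall rules allow port 6379; substitute your instance’s IP for REDIS_HOST):

$ redis-cli -h REDIS_HOST ping
PONG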

Starter Website

I often find myself setting up a website project, and all the configuration that goes with it. This setup task can be fairly repetitive, so I took some of the basic features I almost always start with and made this project.

The project does have a couple of “extra” features beyond an absolutely basic website. Sometimes I want to just build a set of files I can host somewhere; other times I want to just deploy a Docker image to a cloud service.

For a build system, I’m using npm from NodeJS. For site packaging, I’m using webpack, which also provides a development server with live reload.

The repository is here: https://github.com/d4lton/jools. The name is a play on Jewels, which is a simple game I’ve been working on.

It doesn’t include a UI framework or CSS preprocessor, because those tend to go hand-in-hand, and I didn’t want to predetermine which framework I might want to use (if any).

It also isn’t set up for TypeScript, even though I tend to use TypeScript these days. I may create a TypeScript version of this project, but converting to TypeScript is fairly simple. I chose to keep this starter project as basic as I could.

The README.md should have enough instruction to get going. Follow the instructions under INSTALLING and RUNNING.

The structure of the project is hopefully easy to understand:

  • src: This directory (and any sub-directories you create) is where your JavaScript code should be. The src/index.js file is the “main” JavaScript file and will automatically be included in static/index.html.
  • static: This directory (and any sub-directories) is where your HTML and CSS files should be. The static/index.html is the “main” HTML file and will automatically include your JavaScript code.
  • config: This directory contains the configuration JSON file(s). Right now there is just the config.development.json file for local development, and it is basically empty. However, any properties you set in here will be “injected” into the static website and be available in the process.env object. For the curious, this is intended to mirror how environment variables are made available in a server-side NodeJS project (see the sketch below).
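
For reference, here is a minimal sketch of how that kind of injection can be wired up with webpack’s DefinePlugin (the config-loading details here are my assumption, not necessarily exactly how the project does it):

// webpack.config.js (sketch): expose config values as process.env.*
const webpack = require("webpack");
const config = require("./config/config.development.json");

module.exports = {
  // ...entry, output, devServer, and so on...
  plugins: [
    new webpack.DefinePlugin({
      // replaces the process.env token with the config object at build time
      "process.env": JSON.stringify(config)
    })
  ]
};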

The project doesn’t really do anything, as it is intended to be a starting place for a website. The included HTML, CSS, and JavaScript are all basically placeholders.

Hopefully this project helps you in some way! Good luck!

React in a Legacy App

I am working on a relatively old legacy web app. It’s a very complex system, built upon jQuery, and it uses requirejs for module loading. I have been slowly updating it to use ES5+ features and React. In particular, working with React in this app has been a bit painful. This app, you see, doesn’t have a build environment, so webpack and babel aren’t available, at least not in the standard way. With a great deal of effort, it certainly could be made to work with a build environment, but my intention is to spend my energy on cleaning up the code and, when that task is complete, bringing in a standard build system. Ah, that will be great.

But, at least for today, I want to add some React code. I’ve done it before in this app, but since I have no build system, I’ve used React.createElement(). If you’ve tried to build anything more than a trivial Component using React.createElement(), you know that it gets old, fast.

So I want to just use JSX, naturally. But with no build system, I wasn’t sure how to proceed. There are plenty of examples of how to use babel standalone and mark your script tags as "text/babel" to have your JSX code converted to JS on the fly. But there are a couple of problems. First, I’m using requirejs, and it doesn’t let you mark only some script tags as "text/babel" (it’s all or none). I even went so far as to modify requirejs to only mark certain script tags as "text/babel", but that leads to the next problem: babel standalone doesn’t seem to compile the scripts imported by requirejs. I’m sure there’s a way, but I gave up on that approach.

What I really wanted was something that would just compile JSX files into JS, so I could write complex React Components and not worry about it.

I found some examples of using browserify and babel to basically do what a build system would do, but they end up creating massive JS files with all kinds of transpiled code. Clearly I was doing this wrong.

I finally found that I can use a basic babel transformFile() call with some relatively simple configuration that mainly just compiles the JSX into plain JS with React.createElement() calls, and really no other changes to my code.

I wrote a “watcher” script for this purpose, and now just keep it running in my IDE while I work:

const args = require('minimist')(process.argv.slice(2));
const fs = require("fs/promises");
const path = require("path");
const chokidar = require("chokidar");
const babel = require("@babel/core");
const options = {
  ignored: /(^|[\/\\])\../ // ignore dotfiles
};
// compile a .jsx file into a .js file in the same directory
async function compile(src) {
  return new Promise((resolve, reject) => {
    const info = path.parse(src);
    const dest = path.join(info.dir, `${info.name}.js`);
    // babel picks up its options from babel.config.json
    babel.transformFile(src, {}, async (error, result) => {
      if (error) { return reject(error) }
      await fs.writeFile(dest, result.code);
      console.log(`compiled ${src} -> ${dest}`);
      resolve();
    });
  });
}
// remove the compiled .js file when its .jsx source is deleted
async function remove(src) {
  const info = path.parse(src);
  const dest = path.join(info.dir, `${info.name}.js`);
  return fs.unlink(dest)
    .then(() => {
      console.log(`removed ${dest}`);
    })
    .catch(error => {
      // ignore: the compiled file may not exist
    });
}
(async () => {
  // named "file" instead of "path" to avoid shadowing the path module
  chokidar.watch('**/*.jsx', options).on('all', async (event, file) => {
    if (event === "add" || event === "change") {
      await compile(file);
    }
    if (event === "unlink" && args.delete) {
      await remove(file);
    }
  });
})();
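
I run it from the project root (assuming the script is saved as watch-jsx.js; the file name is just my choice), where the --delete flag tells it to remove the compiled .js file whenever its .jsx source is deleted:

$ node watch-jsx.js --delete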

The babel configuration file (babel.config.json) looks like this:

{
  "presets": [
    [
      "@babel/env",
      {
        "targets": {
          "edge": "17",
          "firefox": "60",
          "chrome": "67",
          "safari": "11.1"
        },
        "useBuiltIns": "usage",
        "corejs": "3.6.5"
      }
    ],
    "@babel/preset-react"
  ],
  "plugins": ["@babel/plugin-syntax-class-properties"]
}

For packages, I’ve installed these in my package.json file (core-js backs the "useBuiltIns" setting, and @babel/plugin-syntax-class-properties is needed for the "plugins" entry above):

@babel/cli
@babel/core
@babel/plugin-syntax-class-properties
@babel/preset-env
@babel/preset-react
chokidar
core-js
minimist

Jupyter Widgets: Sending Custom Event To Frontend From Backend

Jupyter Widgets can have a separate frontend and backend component. Sometimes, you need to send a message from one to the other. This example shows the basics of sending a message from the backend to the frontend.

In your Widget’s frontend (JavaScript) code, listen for the custom event from the backend:

    render: function() {
        // listen for custom messages arriving from the backend
        this.model.on('msg:custom', this.customEvent.bind(this));
    },
    customEvent: function(event) {
        switch (event.type) {
            case 'my_important_event':
                console.log(event.value); // "blue"
                break;
        }
    }

Now, in the Widget’s backend (Python) code, send the message as needed:

    def send_message_to_frontend(self):
        # Widget.send() wraps this dict in a custom message for the frontend
        self.send({"type": "my_important_event", "value": "blue"})

Jupyter Widgets: Sending Custom Event To Backend From Frontend

Jupyter Widgets can have a separate frontend and backend component. Sometimes, you need to send a message from one to the other. This example shows the basics of sending a message from the frontend to the backend.

In your Widget’s backend (Python) code, listen for the custom event from the frontend:

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # register a handler for custom messages from the frontend
        self.on_msg(self._handle_custom_msg)

    def _handle_custom_msg(self, content, buffers):
        if content['event'] == 'another_important_event':
            print(content['value'])  # "green"

Now, in the Widget’s frontend (JavaScript) code, send the message as needed:

    sendMessageToBackend: function() {
        // the dict is delivered to the backend's on_msg handler(s)
        this.model.send({event: 'another_important_event', value: 'green'});
    },

Jupyter Widgets: Finding My Cell Object

When writing a custom Jupyter Widget, sometimes you need to know the Cell (CodeCell, usually) that your Widget is running in. You can get a list of all Cells from the Notebook object in JavaScript, but finding your Cell isn’t exactly straightforward.

Fortunately, it isn’t that hard to determine your Cell, since you can find the container element for your Widget, and then loop through all Cells in the Notebook to see which one you are in:

// find the .cell element that contains this Widget's view
const container = this.$el.parents('.cell');
// match it against the element of each Cell in the Notebook
const cell = this.notebook.get_cells().find(it => it.element.is(container));

If all goes well, cell will contain your Cell object.
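
Here, this.notebook is a reference to the classic Notebook object. If your view doesn’t already hold such a reference, a sketch of one way to get it (this assumes the classic Jupyter Notebook frontend, which exposes it via the base/js/namespace requirejs module):

define(['base/js/namespace'], function(Jupyter) {
    // Jupyter.notebook is the Notebook object for the current page
    const notebook = Jupyter.notebook;
});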

Parsing CSV and TSV Files

introduction

A very common file format for transferring data is the Comma (or TAB) Separated Value file. These formats appear simple, but have several complications that make implementing a CSV/TSV parser slightly more challenging than just breaking a row of text by its separator.

It’s probably best to start with some definitions before diving into any details. This file format is a text file where each line of text is like a row in a spreadsheet, and each of those rows is made up of columns.

The generally accepted practice is to call the rows Records, and the columns Fields. The first line typically contains the Field names, but this header line is not required.

basic parsing

Fields are separated by their separator character, which is a comma for a CSV and a TAB for a TSV:

id,animal,sound
1,cat,Meow
2,dog,Bark
3,cow,Moo

embedded commas

This works perfectly fine for a TSV, since the value in a Field probably wouldn’t contain a TAB character. But what about a CSV? It seems reasonable that the value in a Field could have one or more commas.

id,animal,sound,advantage,size
1,cat,Meow,warm, purring, fuzzy,small
2,dog,Bark,loyal, funny, protective,medium
3,cow,Moo,tippable, clears grass,large

There’s no reliable way to parse the Fields in this file (note that the number of Field names in the first row, if included, is not required to match the number of Fields in each Record, so the header can’t be used to disambiguate). The solution is to require double quotes around a Field that contains a comma:

id,animal,sound,advantage,size
1,cat,Meow,"warm, purring, fuzzy",small
2,dog,Bark,"loyal, funny, protective",medium
3,cow,Moo,"tippable, clears grass",large

Now we can parse this file, since the double quotes indicate the beginning and end of a Field. If we see a starting double quote, we can just keep reading until we see a closing double quote.

embedded double quotes

But what about double quotes themselves? It’s certainly common to see double quotes in a Field: text in the Field could be quoted, or a double quote could be used to mean inches. If we used the above logic of using an opening and closing double quote to know when a Field begins and ends, something like this will cause trouble:

id,name,size
1,shoe,12
2,round clock,8"x8"
3,used laptop,just "like new!"

The answer is to “escape” the double quotes in some way, and the decision was to use double double quotes. However, this causes its own problem: if we’re using double quotes to determine whether commas are part of a Field or are the separator between Fields, how can we tell? The answer, again, is to require double quotes around a Field that contains a double quote (or, more accurately now, an escaped double quote):

id,name,size
1,shoe,12
2,round clock,"8""x8"""
3,used laptop,"just ""like new!"""

This scheme of escaping double quotes, and requiring double quotes around any Field that contains them, is common to both CSV and TSV.

embedded newlines or carriage returns

The final major thing to consider when parsing a CSV or TSV is that a Field could contain a newline or carriage return character:

id,name,description
1,shoe,these are fine shoes
2,round clock,features:
2 hands
round, 8"
maple wood
3,used laptop,technical specs:
2048 Neurobit 1098X3 CPU
12 Parsec pseudo-drive
15" screen

Since a newline (or carriage return) should indicate the end of the current Field and also the completion of the current Record, how do we deal with them as part of a Field? I’m sure you’ve guessed that double quotes are involved, and they are:

id,name,description
1,shoe,these are fine shoes
2,round clock,"features:
2 hands
round, 8""
maple wood"
3,used laptop,"technical specs:
2048 Neurobit 1098X3 CPU
12 Parsec pseudo-drive
15"" screen"

Notice that there are also escaped double quotes and commas in these Fields as well. The opening and closing double quotes allow for those in the Field.
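
Putting these rules together, here is a minimal sketch of a parser in JavaScript. It handles quoted Fields, escaped double quotes, and embedded newlines, but it reads the whole file from memory and isn’t tuned for streaming or very large files:

// parse CSV/TSV text into an array of Records, each an array of Fields
function parseCsv(text, separator = ",") {
  const records = [];
  let record = [];
  let field = "";
  let inQuotes = false;
  for (let i = 0; i < text.length; i++) {
    const ch = text[i];
    if (inQuotes) {
      if (ch === '"') {
        if (text[i + 1] === '"') { field += '"'; i++; } // escaped double quote
        else { inQuotes = false; }                      // closing double quote
      } else {
        field += ch; // separators and newlines are literal inside quotes
      }
    } else if (ch === '"' && field === "") {
      inQuotes = true; // opening double quote at the start of a Field
    } else if (ch === separator) {
      record.push(field); field = ""; // end of Field
    } else if (ch === "\n" || ch === "\r") {
      if (ch === "\r" && text[i + 1] === "\n") { i++; } // swallow CRLF pairs
      record.push(field); field = "";
      records.push(record); record = []; // end of Record
    } else {
      field += ch;
    }
  }
  if (field !== "" || record.length > 0) { // final Record without a trailing newline
    record.push(field);
    records.push(record);
  }
  return records;
}

Call parseCsv(text) for a CSV, or parseCsv(text, "\t") for a TSV.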

other considerations

In general, this covers the major issues faced when parsing a CSV or TSV file. Note that a TSV can also allow for some special escaped characters:

character                            escaped value
newline (the \n character)           \n as two characters
carriage return (the \r character)   \r as two characters
tab (the \t character)               \t as two characters
backslash (the \ character)          \\ as two characters

Tomcat: Enabling SSL

Usually, when you get your SSL certificates, they are .crt, .key, and .ca-bundle files. These work fine for Apache’s HTTP server, but Apache’s Tomcat server needs them converted into a .jks (Java Key Store) file, and the Tomcat configuration set up to use that key store. To simplify the conversion, here is a shell script to perform the steps, under the assumption that the .crt, .key, and .ca-bundle files all have the same prefix.

#!/bin/sh
if [ "$1" = "" ]; then
  echo ""
  echo "  usage: $0 <file-prefix> <password>"
  echo ""
  echo "  This tool requires that all files have the same prefix, and the .crt, .key, and .ca-bundle files exist."
  echo ""
  echo "  For example, if your files are named example.com.crt, example.com.key, example.com.ca-bundle, you would do:"
  echo ""
  echo "    $0 example.com mySekretPasswd"
  echo ""
  exit 1
fi
echo ""
echo "  Generating JKS file for $1..."
echo ""
echo "----------------------------------------------------------"
# combine the certificate and private key into a PKCS12 file
openssl pkcs12 -export -in "$1.crt" -inkey "$1.key" -name "$1" -out "$1.p12" -passout "pass:$2"
# import the PKCS12 file into a new Java Key Store
keytool -importkeystore -deststorepass "$2" -destkeystore "$1.jks" -srckeystore "$1.p12" -srcstoretype PKCS12 -srcstorepass "$2"
# add the CA bundle to the key store
keytool -import -alias bundle -trustcacerts -file "$1.ca-bundle" -keystore "$1.jks" -storepass "$2"
# sanity check: the key store should now contain an alias matching the prefix
prefix_alias=$(keytool -list -v -keystore "$1.jks" -storepass "$2" | grep -i alias | grep "$1")
if [ "$prefix_alias" = "" ]; then
  echo ""
  echo "  ** something seems to have gone wrong, $1 not found in aliases"
  echo ""
  exit 1
fi
echo "----------------------------------------------------------"
echo ""
echo "  JKS file created."
echo ""
echo "  Copy $1.jks to Tomcat's ssl directory, typically something like /etc/tomcat8/ssl/$1.jks"
echo ""
echo "  Add or Update the <Connector> entries in Tomcat's server.xml to be something like:"
echo ""
echo "    <Connector port=\"8443\" protocol=\"org.apache.coyote.http11.Http11NioProtocol\" maxThreads=\"150\" SSLEnabled=\"true\" scheme=\"https\" secure=\"true\" clientAuth=\"false\" sslProtocol=\"TLS\" keystoreFile=\"/etc/tomcat8/ssl/$1.jks\" keystoreType=\"JKS\" keystorePass=\"$2\" keyAlias=\"$1\" />"
echo "    <Connector port=\"8009\" protocol=\"AJP/1.3\" redirectPort=\"8443\" />"
echo "" 

An example of using the tool, if your certificate files all start with example.com:

./convert-for-tomcat.sh example.com mySekretPasswd