
Building a Blog Comment API with AWS Serverless

· 3 min read
ひかり
Main blogger

I wanted to add a comment section to this blog, so rather than use an off-the-shelf solution like Disqus or giscus, I built my own API on AWS serverless. Here's a look at the design and implementation.

Architecture

Requests flow through the following stack:

Browser (www.hikari-dev.com)
↓ HTTPS
API Gateway
├── GET /comment?postId=... → Fetch comments
├── POST /comment → Submit a comment
└── PATCH /comment/{id} → Admin (toggle visibility)

Lambda (Node.js 20 / arm64)

DynamoDB (comment storage)
+ SES v2 (admin email notifications)

The code is written in TypeScript and managed as IaC with SAM (Serverless Application Model). Lambda runs on arm64 (Graviton2) to shave a bit off the cost.

DynamoDB Table Design

The table is named blog-comments, with postId as the partition key and commentId as the sort key.

| Key | Type | Description |
| --- | --- | --- |
| postId | String | Post identifier (e.g. /blog/2026/03/20/hime) |
| commentId | String | ULID (lexicographically sortable by time) |

Using ULID for the sort key means comments retrieved with QueryCommand are automatically returned in chronological order — which is why I chose ULID over UUID.
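As a minimal sketch (assuming the ulid npm package and a shared Document Client module; names are illustrative), fetching a post's comments is a single Query, and the sort order comes for free:

import { ulid } from "ulid";
import { QueryCommand } from "@aws-sdk/lib-dynamodb";
import { ddb } from "./dynamo"; // shared Document Client (hypothetical module path)

// At write time, each comment gets a time-ordered ID.
export const newCommentId = (): string => ulid();

// A plain Query on the partition key returns comments already sorted by
// the ULID sort key, oldest first (ScanIndexForward defaults to true).
export async function listComments(postId: string) {
  const { Items } = await ddb.send(
    new QueryCommand({
      TableName: "blog-comments",
      KeyConditionExpression: "postId = :p",
      ExpressionAttributeValues: { ":p": postId },
    })
  );
  return Items ?? [];
}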

Spam Filtering

Before writing a comment to DynamoDB, the handler checks it against a keyword list defined in keywords.json.

If a keyword matches, the comment is saved with isHidden: true and isFlagged: "1", hiding it automatically. If nothing matches, it goes live immediately.

isFlagged is used as the key for a sparse GSI. Comments that pass the filter don't get this attribute at all, so they never appear in the index, which is good for both cost and efficiency. This is achieved simply by setting removeUndefinedValues: true on the DynamoDB Document Client.

export const ddb = DynamoDBDocumentClient.from(client, {
  marshallOptions: {
    removeUndefinedValues: true,
  },
});
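For illustration, here's a hedged sketch of the write path built on that client (the keyword check, module paths, and exact attribute set are assumptions based on the description above):

import { PutCommand } from "@aws-sdk/lib-dynamodb";
import { ulid } from "ulid";
import { ddb } from "./dynamo"; // shared Document Client (hypothetical module path)
import keywords from "./keywords.json"; // assumed to be a plain string array

export async function saveComment(postId: string, body: string): Promise<void> {
  const matched = keywords.some((k: string) => body.includes(k));
  await ddb.send(
    new PutCommand({
      TableName: "blog-comments",
      Item: {
        postId,
        commentId: ulid(),
        body,
        isHidden: matched,
        // undefined is stripped by removeUndefinedValues, so clean comments
        // never carry the attribute and stay out of the sparse GSI.
        isFlagged: matched ? "1" : undefined,
      },
    })
  );
}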

Admin Email Notifications

Every time a comment is submitted, SES v2 sends me an email containing the author name, body, rating, IP address, and flag status.

The email is sent asynchronously, and any failure is silently swallowed. This keeps the POST response time unaffected by email delivery.

sendCommentNotification(record).catch((err) => {
  console.error("sendCommentNotification error:", err);
});

Privacy

IP addresses and User-Agent strings are stored in DynamoDB for moderation purposes, but they are never included in GET responses. This separation is enforced at the type level.
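As a sketch of what that type-level separation can look like (the field names here are illustrative, not the actual schema):

// Everything that gets written to DynamoDB.
interface CommentRecord {
  postId: string;
  commentId: string;
  author: string;
  body: string;
  ipAddress: string; // moderation only
  userAgent: string; // moderation only
}

// The GET handler's return type omits the moderation fields, so leaking
// them through the API becomes a compile-time error.
type PublicComment = Omit<CommentRecord, "ipAddress" | "userAgent">;

const toPublic = ({ ipAddress, userAgent, ...rest }: CommentRecord): PublicComment => rest;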

Security

| Layer | Measure |
| --- | --- |
| Network | AWS WAF rate limit: 100 req / 5 min / IP |
| CORS | Restricted to https://www.hikari-dev.com |
| Admin API | API Gateway API key auth (X-Api-Key header) |
| Spam | Keyword filter with automatic hiding |

For the admin endpoint (PATCH /comment/{id}), setting ApiKeyRequired: true in the SAM template is all it takes to enable API key authentication — no need to implement a custom Lambda Authorizer.
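For reference, a sketch of what that event definition can look like in the SAM template (resource and path names are assumptions; ApiKeyRequired is the only point):

Events:
  AdminPatch:
    Type: Api
    Properties:
      Path: /comment/{id}
      Method: patch
      Auth:
        ApiKeyRequired: true

Note that an API key only takes effect once it is attached to a usage plan on the gateway side.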

Wrap-up

The serverless setup means no server management, and DynamoDB's on-demand billing keeps costs minimal for a low-traffic personal blog.

The whole thing is packaged with SAM + TypeScript + esbuild, and deploying is as simple as sam build && sam deploy.

Creating Hime — A VSCode Extension for Chatting with Multiple Generative AI Agents

· 2 min read
ひかり
Main blogger

I built a VSCode extension called Hime (HikariMessage) that lets you chat with multiple AI providers.

It follows a BYOK (Bring Your Own Key) model — you just need an API key from each provider you want to use.

What is Hime?

Hime is a generative AI chat extension that lives in the VSCode sidebar. It supports Anthropic, OpenAI, Azure OpenAI, OpenRouter, and Ollama, and lets you switch between providers easily via a dropdown menu.

Key Features

Multiple AI Provider Support

The following providers are supported:

  • Anthropic (Claude)
  • OpenAI
  • Azure OpenAI
  • OpenRouter
  • Ollama

Streaming Responses

Responses are displayed in real time, so even long answers feel snappy.

MCP

You can enable MCP by adding a JSON configuration in the settings like this:

Example

{
  "filesystem": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "C:\\Users"]
  }
}

Rich UI

  • Markdown rendering
  • Syntax highlighting for code blocks
  • Copy button for code blocks
  • MCP tool output display

Persistent Chat History

Conversation history is saved as JSON files under ~/.hime/chats/. You can pick up right where you left off even after restarting VSCode.

Automatic System Prompt

Workspace information, OS details, and the context of your currently open editor are automatically injected into the system prompt. Just say "fix this file" and the AI already knows what you're looking at.
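Hime's actual code isn't reproduced here, but the idea can be sketched with the standard VSCode API (a minimal, illustrative sketch):

import * as os from "node:os";
import * as vscode from "vscode";

// Collect workspace, OS, and active-editor context for the system prompt.
function buildSystemPrompt(): string {
  const editor = vscode.window.activeTextEditor;
  const lines = [
    `OS: ${os.platform()} ${os.release()}`,
    `Workspace: ${vscode.workspace.name ?? "(none)"}`,
  ];
  if (editor) {
    lines.push(
      `Active file: ${editor.document.fileName} (${editor.document.languageId})`
    );
  }
  return lines.join("\n");
}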

Setup

Requires Node.js 20+ and VSCode 1.96+.

git clone https://github.com/Himeyama/hime
cd hime
npm install
npm run watch # Development: watches both Extension Host and Webview simultaneously

Then press F5 in VSCode to launch the extension host. API keys can be entered via the settings panel in the sidebar and are stored encrypted using VSCode's SecretStorage.
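The SecretStorage part boils down to something like the following sketch (the key naming scheme is an assumption):

import * as vscode from "vscode";

// API keys never touch settings.json; SecretStorage keeps them in the
// OS keychain, encrypted at rest.
export async function saveApiKey(
  ctx: vscode.ExtensionContext,
  provider: string,
  apiKey: string
): Promise<void> {
  await ctx.secrets.store(`hime.apiKey.${provider}`, apiKey);
}

export async function loadApiKey(
  ctx: vscode.ExtensionContext,
  provider: string
): Promise<string | undefined> {
  return ctx.secrets.get(`hime.apiKey.${provider}`);
}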

Wrapping Up

Hime's strength is that you can interact with AI without leaving your editor — and even delegate tool execution via MCP. Give it a try!

Repository: https://github.com/Himeyama/hime

I Built a Cloud Storage Service with AWS Serverless

· 3 min read
ひかり
Main blogger

Introduction

I wanted a personal file sharing system, so I built a file storage service using only AWS serverless services.

In this article, I'll walk through the key design decisions and the actual architecture I ended up with.

What I Built

A cloud storage service that lets you upload and download files and manage folders through a web browser.

Key Features

  • File upload / download
  • Folder creation and hierarchical management
  • Bulk ZIP download of multiple files / folders
  • User authentication (sign-up, login, password reset)
  • User profile management

Architecture

Here's the architecture diagram.

Most of the authentication is handled by Cognito. For file transfers, Lambda issues S3 Presigned URLs so the client communicates directly with S3.

Tech Stack

| Layer | Technology |
| --- | --- |
| Backend | C# (.NET 8) / AWS Lambda |
| Authentication | Amazon Cognito + Managed Login v2 |
| API | API Gateway (REST) + Cognito Authorizer |
| Storage | Amazon S3 |

Design Decisions and Reasoning

Using Cognito for Authentication

I leveraged Cognito's OAuth 2.0 endpoints and Managed Login to implement authentication.

In the end, I only needed a single Lambda function for auth: TokenFunction.

In terms of both functionality and security, less code is better. There's no need to write what AWS services already do for you.

File Transfers via Presigned URLs

Routing file uploads and downloads through Lambda introduces several problems:

  • Hitting Lambda's payload size limit
  • Loading large files into Lambda memory is costly
  • Transfer time counts against Lambda execution time

With Presigned URLs, Lambda only issues the URL — the actual file transfer happens directly between the browser and S3.

Lambda execution time stays in the tens of milliseconds, and the file size limit extends all the way to S3's own limits.

Upload flow:
1. Browser → Lambda: "I want to upload file.pdf! Send me an upload URL."
2. Lambda → Browser: "Here's a Presigned URL. PUT your file here."
3. Browser → S3: "Sending PUT to S3."
4. Browser → Lambda: "Upload complete!"
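The service is written in C#, but the core of step 2 is easy to sketch; here is a hedged TypeScript version using AWS SDK v3 (bucket, key, and expiry are illustrative):

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({});

// Sign a short-lived PUT URL for the browser. The file bytes themselves
// never pass through Lambda.
export async function createUploadUrl(bucket: string, key: string): Promise<string> {
  const command = new PutObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(s3, command, { expiresIn: 300 }); // valid for 5 minutes
}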

ZIP Download for Folders

S3 doesn't have a built-in feature to download an entire folder.

For bulk downloads, I generate a ZIP file in Lambda, temporarily store it in S3, and return a Presigned URL for it.

The temporary ZIP file is automatically deleted after 1 day via an S3 lifecycle rule, so there's no garbage buildup.
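In the SAM template, such a lifecycle rule takes only a few lines (a sketch in CloudFormation syntax; the resource name and prefix are assumptions):

TempBucket:
  Type: AWS::S3::Bucket
  Properties:
    LifecycleConfiguration:
      Rules:
        - Id: ExpireTempZips
          Status: Enabled
          Prefix: tmp/      # temporary ZIPs land here (illustrative)
          ExpirationInDays: 1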

Security

| Measure | Implementation |
| --- | --- |
| Brute-force protection | Cognito's built-in lockout (15-minute lock after 5 failures) |
| API protection | JWT verification via Cognito Authorizer |
| CORS | AllowedOrigin restricted to a specific domain |
| Temporary file management | S3 lifecycle rule auto-deletes files after 1 day |

Cost

With a serverless architecture, costs are nearly zero when not in use.

  • Cognito: ESSENTIALS Tier is free up to 10,000 MAU
  • Lambda: Free up to 1 million requests per month
  • S3: Pay-as-you-go based on storage used (~$0.025/GB per month)
  • API Gateway: $3.50 per 1 million requests

For personal use, monthly costs should land somewhere between a few cents and a couple of dollars.

Infrastructure as Code

The entire infrastructure is defined in a single template.yaml (AWS SAM).

Cognito User Pool, API Gateway, 3 Lambda functions, S3 bucket, CloudWatch alarms, SNS — all resources defined in roughly 600 lines of YAML.

Installing Rocky Linux 8.10 on WSL2

· 2 min read
ひかり
Main blogger

Download the WSL 2 Image

Download the Rocky Linux container image from the following URL:

https://dl.rockylinux.org/pub/rocky/8/images/x86_64/Rocky-8-Container-Base.latest.x86_64.tar.xz

Reference: https://docs.rockylinux.org/8/guides/interoperability/import_rocky_to_wsl/

Extract the Image

Extract the .tar.xz file and convert it into a .tar archive.
WSL2 can import .tar files directly.

cd ~/Downloads

xz -d Rocky-8-Container-Base.latest.x86_64.tar.xz

Note: The built-in Windows bsdtar cannot extract this file.
If the xz command is not available, install it via Cygwin64 or use another WSL distribution.

Import the Image into WSL2

wsl --import RockyLinux-8.10 $HOME .\Rocky-8-Container-Base.latest.x86_64.tar --version 2

Verify the Imported Image

wsl -l -v

Add a User and Set the Default User

Example (Username: hikari)

wsl -d RockyLinux-8.10 -u root -- dnf install sudo passwd -y

wsl -d RockyLinux-8.10 -u root -- adduser hikari

wsl -d RockyLinux-8.10 -u root -- passwd -d hikari

wsl -d RockyLinux-8.10 -u root -- usermod -aG wheel hikari

wsl -d RockyLinux-8.10 -u root -- sed -i 's/^# %wheel/%wheel/' /etc/sudoers

wsl -d RockyLinux-8.10 -u root -- sh -c 'echo -e "[user]\ndefault=hikari" >> /etc/wsl.conf'

Note: the echo and redirection must be wrapped in sh -c so they run inside the distribution; otherwise the Windows shell interprets the pipe and /etc/wsl.conf is never written.

Launch the Image

wsl -d RockyLinux-8.10

Investigation into the Best Compression Method

· 5 min read
ひかり
Main blogger

Conclusion

Let me start with the conclusion.

I measured the time and size when compressing and extracting a large set of files totaling 34 GiB.

Compression / Archive Creation

| Method | Command | real | user | sys | Compressed size |
| --- | --- | --- | --- | --- | --- |
| tar | `tar cf large-pkg.tar large-pkg` | 1m19.449s | 0m6.702s | 0m48.121s | 26 GiB |
| tar.gz | `tar czf large-pkg.tar.gz large-pkg` | 11m33.942s | 11m18.811s | 0m55.573s | 5.5 GiB |
| LZ4 | `tar cf - large-pkg \| lz4 > large-pkg.tar.lz4` | 3m33.187s | 0m53.958s | 2m57.122s | 8.6 GiB |
| zstd | `tar cf - large-pkg \| zstd -T0 -o large-pkg.tar.zst` | 9m13.819s | 1m45.049s | 2m43.609s | 4.8 GiB |
| bzip2 | `tar cf - large-pkg.1 \| bzip2 > large-pkg.tar.bz2` | 37m22.743s | 28m40.329s | 3m31.000s | 4.3 GiB |
| xz | `tar cf - large-pkg \| xz > large-pkg.tar.xz` | 125m31.447s | 124m10.523s | 5m18.330s | 3.3 GiB |

Extraction (Decompression)

| Method | Command | real | user | sys |
| --- | --- | --- | --- | --- |
| tar | `tar xf large-pkg.tar` | 2m11.793s | 0m6.906s | 2m4.183s |
| tar.gz | `tar xf large-pkg.tar.gz` | 3m39.544s | 2m1.189s | 2m58.317s |
| tar.gz (gzip) | `gzip -dc large-pkg.tar.gz \| tar xf -` | 3m40.416s | 2m0.272s | 3m0.043s |
| tar.gz (pigz) | `pigz -dc large-pkg.tar.gz \| tar xf -` | 3m53.711s | 1m38.147s | 4m42.893s |
| LZ4 | `lz4 -dc large-pkg.tar.lz4 \| tar xf -` | 4m46.576s | 0m32.174s | 4m36.055s |
| zstd | `zstd -dc large-pkg.tar.zst \| tar xf -` | 3m46.419s | 0m46.533s | 3m34.668s |
| bzip2 | `bzip2 -dc large-pkg.tar.bz2 \| tar xf -` | 11m31.287s | 9m52.644s | 4m17.974s |
| xz | `xz -dc large-pkg.tar.xz \| tar xf -` | 8m11.527s | 3m45.562s | 7m15.109s |

Preparing many small files

First, I needed a large number of small files, so I decided to use node_modules.

I set up package.json as shown below; the choice of packages is arbitrary.

package.json
{
  "name": "large-pkg",
  "version": "1.0.0",
  "description": "",
  "author": "",
  "type": "commonjs",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "async": "^3.2.6",
    "axios": "^1.13.2",
    "bcryptjs": "^3.0.3",
    "bluebird": "^3.7.2",
    "body-parser": "^2.2.2",
    "chalk": "^5.6.2",
    "chalk-template": "^1.1.2",
    "cheerio": "^1.1.2",
    "chokidar": "^5.0.0",
    "commander": "^14.0.2",
    "cookie": "^1.1.1",
    "core-js": "^3.47.0",
    "cors": "^2.8.5",
    "debug": "^4.4.3",
    "dotenv": "^17.2.3",
    "express": "^5.2.1",
    "fast-glob": "^3.3.3",
    "form-data": "^4.0.5",
    "glob": "^13.0.0",
    "got": "^14.6.6",
    "inquirer": "^13.2.0",
    "jsonwebtoken": "^9.0.3",
    "lodash": "^4.17.21",
    "mime": "^4.1.0",
    "minimist": "^1.2.8",
    "mkdirp": "^3.0.1",
    "mkdirp-classic": "^0.5.3",
    "mongoose": "^9.1.4",
    "ms": "^2.1.3",
    "node-fetch": "^3.3.2",
    "ora": "^9.0.0",
    "passport": "^0.7.0",
    "prop-types": "^15.8.1",
    "qs": "^6.14.1",
    "react": "^19.2.3",
    "react-dom": "^19.2.3",
    "request": "^2.88.2",
    "rimraf": "^6.1.2",
    "semver": "^7.7.3",
    "sharp": "^0.34.5",
    "socket.io": "^4.8.3",
    "supports-color": "^10.2.2",
    "tslib": "^2.8.1",
    "uuid": "^13.0.0",
    "ws": "^8.19.0",
    "xml2js": "^0.6.2",
    "yargs": "^18.0.0"
  },
  "devDependencies": {
    "@babel/cli": "^7.28.6",
    "@babel/plugin-transform-runtime": "^7.28.5",
    "@babel/preset-env": "^7.28.6",
    "@babel/runtime": "^7.28.6",
    "autoprefixer": "^10.4.23",
    "ava": "^6.4.1",
    "babel-core": "^6.26.3",
    "babel-loader": "^10.0.0",
    "chai": "^6.2.2",
    "commitlint": "^20.3.1",
    "concurrently": "^9.2.1",
    "conventional-changelog": "^7.1.1",
    "cross-env": "^10.1.0",
    "css-loader": "^7.1.2",
    "dotenv-expand": "^12.0.3",
    "eslint": "^8.57.1",
    "eslint-config-prettier": "^10.1.8",
    "eslint-config-standard": "^17.1.0",
    "eslint-plugin-import": "^2.32.0",
    "eslint-plugin-jsx-a11y": "^6.10.2",
    "eslint-plugin-node": "^11.1.0",
    "eslint-plugin-prettier": "^5.5.5",
    "eslint-plugin-react": "^7.37.5",
    "husky": "^9.1.7",
    "jest": "^30.2.0",
    "less": "^4.5.1",
    "lint-staged": "^16.2.7",
    "mocha": "^11.7.5",
    "nodemon": "^3.1.11",
    "playwright": "^1.57.0",
    "pm2": "^6.0.14",
    "postcss": "^8.5.6",
    "prettier": "^3.8.0",
    "puppeteer": "^24.35.0",
    "rollup": "^4.55.1",
    "rxjs": "^7.8.2",
    "sass": "^1.97.2",
    "semantic-release": "^25.0.2",
    "sinon": "^21.0.1",
    "style-loader": "^4.0.0",
    "stylus": "^0.64.0",
    "supertest": "^7.2.2",
    "tailwindcss": "^4.1.18",
    "ts-loader": "^9.5.4",
    "ts-node": "^10.9.2",
    "typescript": "^5.9.3",
    "vite": "^7.3.1",
    "vitest": "^4.0.17",
    "webpack": "^5.104.1",
    "webpack-cli": "^6.0.1",
    "webpack-dev-server": "^5.2.3",
    "zx": "^8.8.5"
  }
}

From here, create node_modules with the following command.

npm i

Check the size.

$ du -h -d1
566M ./node_modules
567M .

This shows that many small files were created.

Copy node_modules to create even more files.

for i in {1..59}; do
  echo "Copying node_modules.${i}..."
  cp -r node_modules "node_modules.${i}"
done

Check the size.

$ du -h -d1
...
34G .

That gives us a reasonably large data set.

tarball

Let's see the speed of creating a tarball.

Archive

$ time tar cf large-pkg.tar large-pkg

real 1m19.449s
user 0m6.702s
sys 0m48.121s

After archiving: 26 GiB

Extraction

$ time tar xf large-pkg.tar

real 2m11.793s
user 0m6.906s
sys 2m4.183s

Archiving and extraction are reasonably fast.

tar.gz

Next, try tar.gz.

The tar command creates an archive, and gzip is a command to compress a single file. Combined, they create a tar.gz file. Nowadays the tar command alone can compress and extract tar.gz files. (The same applies to other compression formats.)

Compression

$ time tar czf large-pkg.tar.gz large-pkg

real 11m33.942s
user 11m18.811s
sys 0m55.573s

After compression: 5.5 GiB

Extraction

$ time tar xf large-pkg.tar.gz

real 3m39.544s
user 2m1.189s
sys 2m58.317s

Extracting with tar and gzip

$ time sh -c 'gzip -dc large-pkg.tar.gz | tar xf -'

real 3m40.416s
user 2m0.272s
sys 3m0.043s

Speed is almost the same.

Parallel extraction (pigz)

$ time sh -c 'pigz -dc large-pkg.tar.gz | tar xf -'

real 3m53.711s
user 1m38.147s
sys 4m42.893s

Not much change. It seems disk I/O is the bottleneck.

bzip2

Compression

$ time sh -c 'tar cf - large-pkg.1 | bzip2 > large-pkg.tar.bz2'

real 37m22.743s
user 28m40.329s
sys 3m31.000s

After compression: 4.3 GiB

Compression takes a long time, but the compression ratio is fairly good.

Extraction

$ time sh -c 'bzip2 -dc large-pkg.tar.bz2 | tar xf -'

real 11m31.287s
user 9m52.644s
sys 4m17.974s

Extraction also takes quite a while.

xz

Compression

$ time sh -c 'tar cf - large-pkg | xz > large-pkg.tar.xz'

real 125m31.447s
user 124m10.523s
sys 5m18.330s

After compression: 3.3 GiB

Compression ratio is excellent, but it takes too long.

If your network is extremely slow, storage costs are high, and you use it only rarely (e.g., once every few years), it might be acceptable.

Extraction

$ time sh -c 'xz -dc large-pkg.tar.xz | tar xf -'

real 8m11.527s
user 3m45.562s
sys 7m15.109s

LZ4

Compression

$ time sh -c 'tar cf - large-pkg | lz4 > large-pkg.tar.lz4'

real 3m33.187s
user 0m53.958s
sys 2m57.122s

After compression: 8.6 GiB

Extraction

$ time sh -c 'lz4 -dc large-pkg.tar.lz4 | tar xf -'

real 4m46.576s
user 0m32.174s
sys 4m36.055s

zstd

zstd is a fast compression method developed by Meta (formerly Facebook).

Compression

$ time sh -c 'tar cf - large-pkg | zstd -T0 -o large-pkg.tar.zst'
/*stdin*\ : 18.57% (27492075520 => 5106239218 bytes, large-pkg.tar.zst)

real 9m13.819s
user 1m45.049s
sys 2m43.609s

Extraction

$ time sh -c 'zstd -dc large-pkg.tar.zst | tar xf -'
large-pkg.tar.zst : 27492075520 bytes

real 3m46.419s
user 0m46.533s
sys 3m34.668s

Libertouch ES (JP) Review

· 2 min read

Libertouch ES

I got the Libertouch ES Japanese Layout (NC07902-B281-ES).

Libertouch ES

Libertouch ES

Here are my honest impressions after about one month of use.

Strengths

  • Typing experience
    Top-tier among membrane keyboards. Keys feel natural and have a light, mechanical-like touch without actually being mechanical. I hope they stick with membrane switches.
  • Durability
    Aluminum construction makes it extremely sturdy. Almost like a bludgeon.
  • Cherry MX compatible keycaps
    It's great that keycaps are replaceable.

Areas for Improvement

  • Unreliable input recognition
Some keys don't respond when pressed, and some keys seem to swap inputs with each other. Likely a firmware or software issue.
  • Keycaps come off easily
Due to the structural design, the spacebar and Enter key detach frequently, which feels unstable.
  • Need replacement keycaps
A key remapping tool is available, which is good, but replacement keycaps are also needed. Demand seems highest for the left Windows key, and ideally for the Home and End keys as well.
  • Better cable options needed
    The included cable is USB-C to USB-C. Since most computers use USB-A, a USB-A to USB-C cable would be preferred.
  • Price is high
    At 80,000 yen as a prototype, the cost is understandable. If all issues were resolved and the price dropped to the 30,000 yen range, I'd buy it.

Overall Assessment

The Libertouch ES is a high-quality membrane keyboard with excellent tactile feedback and durability. However, it has practical concerns: unreliable input recognition, easily detachable keycaps, and cable compatibility issues. More replacement keycap and cable options would increase its appeal. Full-size or 80% layout options would be welcome. There's likely more demand for these than 65%.

Looking forward to improvements and release!

How to create certificates using mkcert on Raspberry Pi for cockpit and configure them on the server (cockpit) and browser

· 2 min read

Operating Environment

Setup was confirmed in the following environment.

  • Raspberry Pi 5
  • AlmaLinux

Meaning of Each Certificate

  • raspberrypi.pem: Server certificate (public key). An SSL certificate issued for the hostname raspberrypi. Clients such as web browsers use it to verify the authenticity of the server. (This is one of the files to install on the server.)
  • raspberrypi+1.pem: Server certificate (public key). The same, but issued for both the hostname raspberrypi and the IP address.
  • raspberrypi-key.pem: The private key corresponding to raspberrypi.pem. Keep it on the server and use it for SSL encryption/decryption. Never leak it to external parties. (This is one of the files to install on the server.)
  • raspberrypi+1-key.pem: The private key corresponding to raspberrypi+1.pem. Same handling as above.
  • rootCA.pem: Local root certificate (public key). The certificate of the local CA (Certificate Authority) automatically generated by mkcert. Installing this certificate on the client (browser, etc.) allows raspberrypi.pem to be treated as trusted.
  • rootCA-key.pem: Private key of the local CA. This is the key corresponding to rootCA.pem, used by mkcert to sign server certificates (e.g., raspberrypi.pem). It is used internally by mkcert and usually does not need to be touched.

Certificate Issuance

Certificates are issued using the mkcert command. After issuance, they are placed on the server.

mkcert raspberrypi <IP address>

sudo cp raspberrypi+1-key.pem /etc/cockpit/ws-certs.d/raspberrypi.key
sudo cp raspberrypi+1.pem /etc/cockpit/ws-certs.d/raspberrypi.crt
sudo systemctl restart cockpit

Check the location of the local root certificate

Check the location of the root CA certificate to install on the PC.

mkcert -CAROOT

Copy the root certificate to the PC

Copy the root certificate to the PC.

scp raspberrypi:/home/<USER>/.local/share/mkcert/rootCA.pem .
cp rootCA.pem rootCA.cer

Register the certificate on Windows

Open rootCA.cer and register the certificate in the certificate store under "Trusted Root Certification Authorities".

Register the certificate on Android terminals

Move rootCA.pem to the device and register it in the settings.

How to install OpenStreetMap with podman

· One min read

import

Download japan-xxx.osm.pbf to your home directory. Then run the following commands to import the data. The :Z at the end of the volume option is for systems with SELinux enabled.

Do not change the /data/region.osm.pbf part.

podman volume create osm-data

podman run -v <downloaded osm.pbf file>:/data/region.osm.pbf:Z -v osm-data:/data/database/ overv/openstreetmap-tile-server import

import example

podman volume create osm-data

podman run -v ~/japan-xxx.osm.pbf:/data/region.osm.pbf:Z -v osm-data:/data/database/ overv/openstreetmap-tile-server import

run

Execute the following command to run the tile server.

podman run -p 8080:80 -v osm-data:/data/database/ -v osm-tiles:/data/tiles/ -d overv/openstreetmap-tile-server run

Once the firewall is configured to allow the port, the server is accessible from the network.

Backup

podman volume export osm-data > osm-data.tar

Publishing a Website Using Raspberry Pi as a Server

· One min read

Setting up nginx on Raspberry Pi

# Install and enable nginx
sudo dnf install nginx

# Edit /etc/nginx/nginx.conf
# sudo nano /etc/nginx/nginx.conf

# Start and enable nginx
sudo systemctl start nginx
sudo systemctl enable nginx
sudo systemctl status nginx

Editing /etc/nginx/nginx.conf

Add the following inside the server { } block within http { }:

location / {
    return 200 'Hello, world!';
    add_header Content-Type text/plain;
}

Cloudflare Settings

  1. Go to https://one.dash.cloudflare.com/.
  2. Open "Network" → "Tunnels".
  3. Click "Add a tunnel".

Cloudflare Tunnels

  4. Click "Select Cloudflared".

Select Cloudflared

  5. Enter a suitable name for "Tunnel name" and click "Save tunnel".

Save Tunnel Name for Cloudflare

Installing cloudflared

# Add cloudflared.repo to /etc/yum.repos.d/
curl -fsSL https://pkg.cloudflare.com/cloudflared-ascii.repo | sudo tee /etc/yum.repos.d/cloudflared.repo

sudo dnf clean packages

# Install cloudflared
sudo dnf install -y cloudflared --nogpgcheck

Starting cloudflared service

sudo cloudflared service install xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Routing traffic

Set the hostname's subdomain and domain, service type, and URL.


Click "Complete setup".

How to install Mozc on AlmaLinux 10 (Raspberry Pi 5 / GNOME / aarch64)

· One min read

Download the rpm files

Search and download the following from rpmfind.net:

  • mozc
  • mozc-gui-tools
  • ibus-mozc

Make sure the architecture is correct.

  • For Raspberry Pi 5, it is aarch64
  • For general PCs, it is x86_64

Example of rpm files

  • mozc-2.31.5810.102-160000.1.2.aarch64.rpm
  • mozc-gui-tools-2.31.5810.102-160000.1.2.aarch64.rpm
  • ibus-mozc-2.31.5810.102-160000.1.2.aarch64.rpm

Install

Specify the downloaded rpm files and install them using the dnf command.

cd ~/Downloads
sudo dnf install ./mozc-2.31.5810.102-160000.1.2.aarch64.rpm ./mozc-gui-tools-2.31.5810.102-160000.1.2.aarch64.rpm ./ibus-mozc-2.31.5810.102-160000.1.2.aarch64.rpm

Logout

Log out once.

Settings

Open "Settings" -> "Keyboard" and register the following in order:

  • Japanese (Mozc)
  • Japanese

Setup complete!