<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:media="http://search.yahoo.com/mrss/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>xoPLA.NET</title><description>A modern blog about futures</description><link>https://xopla.net/</link><atom:link href="https://xopla.net/rss.xml" rel="self" type="application/rss+xml"/><media:thumbnail url="https://xopla.net/og.png"/><item><title>I built a 3D Gaussian Splat viewer and generator with Apple&apos;s new code</title><link>https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/</link><guid isPermaLink="true">https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/</guid><description>In December, Apple published Sharp, a technique for generating 3D Gaussian Splats from a single photograph. Using that, I&apos;ve built a set of tools to generate 3D Gaussian Splats on the fly for my blog posts using hosted ML systems, a custom frontend renderer, and CMS widgets.</description><pubDate>Sat, 24 Jan 2026 11:00:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Heads up:&lt;/strong&gt; This post contains 13 interactive 3D scenes that load as you scroll (~10MB each). They&apos;re cached locally after the first download, but if you&apos;re on mobile data or a slow connection, you may want to save this read for later.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In December, Apple published &lt;a href=&quot;https://arxiv.org/abs/2512.10685&quot;&gt;Sharp&lt;/a&gt;, a technique for generating 3D Gaussian Splats from a single photograph. Here I&apos;ll share how I integrated their code into my blog. I&apos;ve written about &lt;a&gt;Gaussian Splatting&lt;/a&gt; and &lt;a&gt;Neural Radiance Fields&lt;/a&gt; before, but this post will be more concrete, walking through an actual implementation.&lt;/p&gt;
&lt;p&gt;The 3D scenes below are all created by me from images I&apos;ve taken at home in Denmark or on vacation. Click the button in the bottom right corner of each to explore them in full detail. Click and drag to control the view. Pinch to zoom in/out.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/IMG_0007__Conflict_2026-01-24_21.07.41__dUMAP4TG0-topaz-upscale-2.5x_n8gD37TAX.jpeg&quot; alt=&quot;One of hundreds of giant turtles we swam with in the Galapagos&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;One of hundreds of giant turtles we swam with in the Galapagos&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;I love photographing nature. While I&apos;m definitely just a hobbyist, I find it inspiring to capture and share beautiful and unique places and moments. I often feel that just adding pictures to my blog doesn&apos;t put enough emphasis on them. So this December, with the release of the &lt;a&gt;Apple Sharp codebase&lt;/a&gt;, I felt inspired to see if I could build a fully featured pipeline and engine that allow visitors of my blog to explore high-fidelity 3D scenes of some of my photographs.&lt;/p&gt;
&lt;p&gt;Apple did the hard work: training a neural network that infers depth and generates ~1.2 million Gaussians from a single image, then releasing the code. In this blog post I will expand on the infrastructure I built around it to make it plug-and-play for my blog. With the help of AI coding tools, this took me a week or so across three domains: frontend rendering, CMS integration, and cloud ML processing.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Contents:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;#the-technology&quot;&gt;The Technology&lt;/a&gt; - How Sharp works and what makes it different&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#the-frontend&quot;&gt;The Frontend&lt;/a&gt; - Scroll-based rotation, performance, post-processing&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#the-cms&quot;&gt;The CMS&lt;/a&gt; - Three custom widgets for Decap CMS&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#the-backend&quot;&gt;The Backend&lt;/a&gt; - Running Sharp on Modal.com&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#limitations&quot;&gt;Limitations&lt;/a&gt; - Light behavior, difficult images, diorama constraints&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;#what-this-means&quot;&gt;What This Means&lt;/a&gt; - How does this play into the future and existing platforms, like iOS?&lt;/li&gt;
&lt;/ul&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/IMG_9880_XzKxrjZ9Z.HEIC&quot; alt=&quot;Sun setting at beach in Blåvand, Denmark&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Sun setting at beach in Blåvand, Denmark&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;As you scroll, the scene above rotates. You can click and drag to control the rotation, pinch to zoom in, or click to open fullscreen.&lt;/p&gt;
&lt;h2&gt;The Technology&lt;/h2&gt;
&lt;p&gt;Traditional Gaussian Splatting requires dozens of photos from different angles. Sharp uses a neural network to infer depth from a single image, generating a two-layer representation (foreground and background) with about 1.2 million Gaussians encoding color, position, opacity, and scale.&lt;/p&gt;
&lt;p&gt;The result captures light and material properties differently than polygons and textures. You&apos;re looking at millions of splats that encode how light behaves in the scene, not a mesh with a texture painted on.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/5996D6CE-3313-4709-8644-452239ADE1BC_yG5E7u7r4.HEIC&quot; alt=&quot;The amazing collapsed lava tubes of Galapagos&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;The amazing collapsed lava tubes of Galapagos&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;With the Sharp technique we only use one photo. The tradeoff is that it works for nearby viewpoints only. You can orbit slightly, but it doesn&apos;t look good to move the camera aggressively. For blog embeds, that&apos;s fine.&lt;/p&gt;
&lt;h2&gt;The Frontend&lt;/h2&gt;
&lt;p&gt;I wanted the 3D to fit in with the text on my pages, more so than typical 3D viewers, so beyond getting the splats to simply work, I spent quite a lot of effort getting the behaviour of the viewers to feel... well, natural and native to my blog.&lt;/p&gt;
&lt;p&gt;I used Three.js with the &lt;a href=&quot;https://github.com/sparkjsdev/spark&quot;&gt;Spark library&lt;/a&gt; for GPU-accelerated rendering. Initially I used the raw OrbitControls, but since I wanted to animate the camera automatically on scroll, motion felt jerky: user inputs are discrete events, while good motion should be continuous.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/DJI_0642-hdr_darktable_04_eR9lT-_ct.jpg&quot; alt=&quot;Drone shot of Blåvand on New Year&apos;s Eve&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Drone shot of Blåvand on New Year&apos;s Eve&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;My fix was a proxy-camera architecture: OrbitControls manages a proxy camera (an empty Three.js group) that responds immediately to input, while the rendering camera smoothly interpolates toward it.&lt;/p&gt;
&lt;p&gt;This is a technique I often find myself using in my WebGL projects. Instead of driving the camera directly with whatever controller I use or have built, I drive a proxy object and my camera follows it. This has many benefits, but the biggest is that the camera is always smooth, even if the orbit control jumps abruptly. (For example, if the user has a mouse with &quot;notched&quot; scrolling, the camera still animates smoothly.)&lt;/p&gt;
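&lt;p&gt;As a minimal sketch of the idea (the names and damping constant here are illustrative, not taken from my actual code), the proxy jumps instantly on input while the rendering camera eases toward it with frame-rate-independent smoothing:&lt;/p&gt;

```javascript
// Frame-rate independent exponential smoothing toward a target value.
function damp(current, target, lambda, dt) {
  return target + (current - target) * Math.exp(-lambda * dt);
}

const proxy = { x: 0, y: 0, z: 5 };   // jumps instantly on user input
const camera = { x: 0, y: 0, z: 5 };  // always moves smoothly

function updateCamera(dt) {
  camera.x = damp(camera.x, proxy.x, 6, dt);
  camera.y = damp(camera.y, proxy.y, 6, dt);
  camera.z = damp(camera.z, proxy.z, 6, dt);
}

// Simulate an abrupt "notched" scroll jump on the proxy:
proxy.x = 10;
updateCamera(1 / 60);  // camera starts moving but does not snap
```

&lt;p&gt;The nice property is that however violently the proxy jumps, the rendered camera only ever sees a smooth curve toward it.&lt;/p&gt;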
&lt;p&gt;But you need to be careful with performance. Scroll-based rotation has a tendency to trigger lots of rerenders. Make sure to throttle scroll events, and to only render new frames when the camera has actually moved significantly. Otherwise your visitors&apos; fans will get noisy and their computers will heat up. Whenever the camera is still, stop rendering. Keep this in mind before adding any continuous animation to the scene as well: will the user&apos;s device ever get a rest, or will it have to render every frame constantly?&lt;/p&gt;
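&lt;p&gt;A hedged sketch of that render gate (illustrative only, not my actual render loop): skip the expensive render call whenever the camera has not moved past a small threshold since the last frame.&lt;/p&gt;

```javascript
// Only render when the camera has moved meaningfully since last frame.
const EPSILON = 1e-4;
let last = { x: 0, y: 0, z: 0 };
let framesRendered = 0;

function maybeRender(cam) {
  const dx = Math.abs(cam.x - last.x);
  const dy = Math.abs(cam.y - last.y);
  const dz = Math.abs(cam.z - last.z);
  if (Math.max(dx, dy, dz) > EPSILON) {
    last = { x: cam.x, y: cam.y, z: cam.z };
    framesRendered += 1;  // renderer.render(scene, camera) would go here
  }
}

maybeRender({ x: 0, y: 0, z: 0 });    // no movement: skipped
maybeRender({ x: 0.5, y: 0, z: 0 });  // moved: rendered
maybeRender({ x: 0.5, y: 0, z: 0 });  // still again: skipped
```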
&lt;h3&gt;Scroll-Based Rotation&lt;/h3&gt;
&lt;p&gt;The 3D responds to scroll. As you read down the page, scenes rotate, giving different perspectives without requiring interaction. If you drag, the scroll animation yields and resumes when you continue scrolling. Through the CMS I can configure how much rotation should happen and in which direction.&lt;/p&gt;
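&lt;p&gt;Conceptually, the scroll mapping is just a clamped progress value scaled by a configured angle and direction. A simplified sketch (the parameter names are my own, not the actual CMS field names):&lt;/p&gt;

```javascript
// Map vertical scroll position to a rotation angle.
// maxAngleRad and direction would come from the per-scene CMS settings.
function scrollToRotation(scrollY, pageHeight, maxAngleRad, direction) {
  const progress = Math.min(1, Math.max(0, scrollY / pageHeight));
  return direction * progress * maxAngleRad;
}

// A quarter of the way down the page, rotating up to 30 degrees:
const angle = scrollToRotation(500, 2000, Math.PI / 6, 1);
```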
&lt;p&gt;I also implemented five reveal animations that trigger when a splat enters the viewport: fade, radial expansion, spiral, wave, and bloom. Each runs on GPU shaders via Spark&apos;s dyno system. While they technically work, it&apos;s a bit too much visually, so I prefer to just skip them.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/6097F1CF-12D6-4D8E-8EF6-2370E26B84A9_-Z6wwVQ3_.HEIC&quot; alt=&quot;The Wildlife Lodge in the Ecuadorian Amazon, run and owned by indigenous people&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;The Wildlife Lodge in the Ecuadorian Amazon, run and owned by indigenous people&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3&gt;Performance&lt;/h3&gt;
&lt;p&gt;Running real-time 3D on phones and desktop GPUs requires optimization. The system uses viewport-based pixel density scaling: when a canvas is centered on screen, it renders at full quality. As it moves toward the edges, quality drops. If you pay attention, you can see it, but I&apos;m sure most users won&apos;t notice.&lt;/p&gt;
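&lt;p&gt;The scaling can be sketched as a function of the canvas center&apos;s distance from the viewport center (the specific numbers here are illustrative, not my production values):&lt;/p&gt;

```javascript
// Full device pixel ratio when the canvas is centered on screen,
// scaling down toward the viewport edges. maxDpr is the mobile cap (2x).
function pixelRatioFor(canvasCenterY, viewportHeight, maxDpr) {
  const mid = viewportHeight / 2;
  // 0 at the center of the screen, 1 at the very top/bottom edge.
  const offset = Math.min(1, Math.abs(canvasCenterY - mid) / mid);
  const quality = 1 - 0.5 * offset;  // never drop below half quality
  return maxDpr * quality;
}

const dprCentered = pixelRatioFor(400, 800, 2);  // canvas centered
const dprEdge = pixelRatioFor(0, 800, 2);        // canvas at top edge
```

&lt;p&gt;The result would then be passed to something like renderer.setPixelRatio whenever it changes by more than a small step, so quality shifts are rare and cheap.&lt;/p&gt;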
&lt;p&gt;Other optimizations: conditional rendering (only when something changes and canvas is in view), mobile DPR capping at 2x, and cleanup when navigating between pages. Since each splat file is around 10MB, I lazy load as you scroll and cache downloaded files in IndexedDB. Returning visitors load from local storage instead of re-downloading.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/8FFCDBDF-B005-4287-AE2E-492B5FA86A8B_gq20-RoWq.HEIC&quot; alt=&quot;Small river going to the ocean at the west coast of Denmark. I really like how the reflection looks here. Had there been waves, then Sharp would not have created the illusion of depth.&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Small river going to the ocean at the west coast of Denmark. I really like how the reflection looks here. Had there been waves, then Sharp would not have created the illusion of depth.&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3&gt;Post-Processing&lt;/h3&gt;
&lt;p&gt;I added a Three.js post-processing stack to enhance the realism. Originally I hoped to add depth of field, but Gaussian Splats don&apos;t write to a depth buffer, so that wasn&apos;t possible. Instead I added a simple vignette and a bloom effect.&lt;/p&gt;
&lt;p&gt;The bloom works well for adding to the illusion - when the sun is in view or reflecting off water, the glow feels natural. All of this is configurable in the CMS and can be turned off entirely if it doesn&apos;t suit a particular image.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/IMG_7702__3__Pfj8pXBZ0.jpg&quot; alt=&quot;A lava tunnel in the Galapagos in Ecuador&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;A lava tunnel in the Galapagos in Ecuador&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2&gt;The CMS&lt;/h2&gt;
&lt;p&gt;I was intrigued by how easy Apple&apos;s Sharp potentially made it to create Gaussian Splats, so as a challenge I wanted to make it possible to add 3D scenes without leaving my editor. Rather than creating the splats locally with a command and then uploading them to my CMS, I wanted the CMS to handle the entire process automatically.&lt;/p&gt;
&lt;h3&gt;Three Widgets&lt;/h3&gt;
&lt;p&gt;I haven&apos;t really talked about this before, but my blog uses Decap CMS, which is quite basic, but also very extendable as it supports custom widgets. I built three:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Generator Widget&lt;/strong&gt;: Upload a photo, generate a 3D Gaussian Splat. One button, ~60 seconds of processing.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Settings Widget&lt;/strong&gt;: 30+ configuration options (camera controls, animations, post-processing) in a compact layout.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Shortcode Component&lt;/strong&gt;: Outputs the `` syntax for MDX posts.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;img src=&quot;https://ik.imagekit.io/xoplanet/Decap_Gaussian_Splat_component_settings_OOq3JvbLf.png&quot; alt=&quot;!lightbox A screenshot of the Decap custom widget and its many properties that can be adjusted: Upload image, generate splat, set scroll transitions both vertical and horizontal, decide on clamping the rotation to min and max, allow moving the camera freely or lock it to just rotation, decide on background color and set post processing effects&quot; title=&quot;My all-inclusive gaussian splat component that I can now insert anywhere across my blog&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The generator was the tricky part. I use &lt;a href=&quot;https://modal.com&quot;&gt;Modal.com&lt;/a&gt; for ML processing, which takes about 60 seconds per image. Vercel serverless functions have timeout limits that made routing through them unreliable, so the browser calls Modal directly. A progress indicator shows during processing, then the source image uploads to ImageKit while the .sog file goes to Cloudflare R2.&lt;/p&gt;
&lt;p&gt;Why two storage services? ImageKit is excellent for images - it provides transformations, optimization, and CDN delivery. But each .sog file is ~10MB, and a page with 13 splats means 130MB of downloads per visitor. During development, I hit ImageKit&apos;s bandwidth limits within a day. Cloudflare R2 has zero egress costs - you pay for storage and operations, but downloads are free. For large static files that don&apos;t need transformation, that&apos;s the right tradeoff: ImageKit for images and transformable assets, R2 for bulky binary files. Or so I think for now at least.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/0EAA36E1-4103-41DF-84D3-B5341DE2F9D0_tYAykNwxp.HEIC&quot; alt=&quot;Old Man of Storr in Scotland during spring in 2023&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Old Man of Storr in Scotland during spring in 2023&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2&gt;The Backend&lt;/h2&gt;
&lt;h3&gt;Running Sharp in the Cloud&lt;/h3&gt;
&lt;p&gt;The ML runs on &lt;a href=&quot;https://modal.com&quot;&gt;Modal.com&lt;/a&gt;, which provides GPU compute without server management. I deployed Apple&apos;s Sharp model there - send an image, their infrastructure runs inference on an A100 GPU, generates the Gaussian Splat, compresses it to .sog, and returns it. About 60 seconds total. And it practically costs me nothing.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/BE392B8D-C526-49FD-AE6E-0E4A82E50F13_IExG1YNbd.HEIC&quot; alt=&quot;The Wall of Tears on Isabela Island at Galapagos&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;The Wall of Tears on Isabela Island at Galapagos&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;This required moving from GitHub Pages to Vercel - not for the ML (that&apos;s Modal), but for ImageKit authentication tokens needed for secure uploads. I&apos;d like to move to European hosting eventually, but Vercel was familiar and worked quickly during the holiday.&lt;/p&gt;
&lt;h2&gt;Limitations&lt;/h2&gt;
&lt;p&gt;So while I&apos;m generally super impressed and proud to have gotten this to work, I&apos;ve discovered a few limitations that I actually hadn&apos;t considered (even though I&apos;m quite used to Gaussian Splats).&lt;/p&gt;
&lt;h3&gt;Light Doesn&apos;t Behave Right&lt;/h3&gt;
&lt;p&gt;Single-image Gaussian Splats have a limitation: light interaction doesn&apos;t translate.&lt;/p&gt;
&lt;p&gt;A sunset over water demonstrates this. In a multi-camera Gaussian Splat, the sun&apos;s reflection shifts as you change viewpoints. In a single-image splat, that reflection stays glued to the water&apos;s surface. The more you rotate, the more the illusion breaks.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/IMG_9864_oOWluNP4s.HEIC&quot; alt=&quot;Sunset at Blåvand beach, this Christmas.&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Sunset at Blåvand beach, this Christmas.&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Multi-camera splats interpolate between captured viewpoints, so light behavior comes from real observations. Single-image inference can&apos;t know how light would behave from angles it never saw. The network infers depth and geometry well, but specular highlights and reflections are view-dependent - they need multiple observations.&lt;/p&gt;
&lt;p&gt;I think this is solvable. An AI could generate synthetic viewpoints at +/- 10 degrees before running Gaussian Splat reconstruction. Those images wouldn&apos;t be perfectly accurate, but the light interpolation would probably be enough to maintain the illusion. Feels like it&apos;s just a few experiments away.&lt;/p&gt;
&lt;h3&gt;Some Images Just Don&apos;t Work&lt;/h3&gt;
&lt;p&gt;Sharp struggles with certain types of images. As you can see in the splat below, technically each blade is turned into 3D, but it looks unrealistic as you rotate. The network can&apos;t reliably guess the depth position of individual grass elements when they overlap and interweave. The result is a scene that falls apart under rotation.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/IMG_9853_R1vX2IVvE.HEIC&quot; alt=&quot;Tall grass in Blåvand. Try dragging to far left or right. Not exactly realistic looking, I would say.&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Tall grass in Blåvand. Try dragging to far left or right. Not exactly realistic looking, I would say.&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;This isn&apos;t unique to grass - any scene with fine, overlapping detail at varying depths will challenge single-image inference. Dense foliage, wire fences, complex lattices. The network needs clear depth cues, and some scenes just don&apos;t provide them. But honestly, it works more often than it doesn&apos;t, and that&apos;s quite impressive.&lt;/p&gt;
&lt;h3&gt;Artifacts and Edge Boundaries&lt;/h3&gt;
&lt;p&gt;Sharp does well at creating realistic depth, especially in the middle of the frame. The challenge comes when you rotate to viewpoints that reveal previously occluded areas.&lt;/p&gt;
&lt;p&gt;Sharp partially solves this by placing large splats behind foreground details - a kind of inferred background fill. It works surprisingly often. But some viewpoints still expose holes where the network couldn&apos;t guess what was hidden.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/0C74FE1F-0C38-4307-9DB4-51388266AC82_-9Q-18rGG.HEIC&quot; alt=&quot;Lake at the top of Pico de Europa, Northern Spain&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Lake at the top of Pico de Europa, Northern Spain&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;A related issue: edge boundaries. Sharp can&apos;t know what exists outside the original image frame. Rotate too far and you see the edge of the reconstructed scene - splats just stop.&lt;/p&gt;
&lt;p&gt;I chose black for the background. White felt worse - it drew attention to the boundaries and made holes more visible. Black blends better with most scenes and feels more like looking into shadow than looking at nothing.&lt;/p&gt;
&lt;p&gt;I&apos;ve considered a potential fix: extending the outermost pixels with a blurred gradient that fills toward the screen edges. It wouldn&apos;t add real information, but it might mask the hard boundaries and fill small gaps. The effect could fade naturally into the scene edges. Worth experimenting with.&lt;/p&gt;
&lt;h3&gt;Dioramas vs. Full 3D&lt;/h3&gt;
&lt;p&gt;This implementation targets diorama-style viewing. I clamp camera rotation to modest angles because single-image splats don&apos;t look meaningful at 360 degrees - the depth inference works for nearby viewpoints, not for seeing behind the camera.&lt;/p&gt;
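&lt;p&gt;The clamping itself is simple. A sketch with an assumed limit (the real per-scene limits are configured in the CMS):&lt;/p&gt;

```javascript
// Clamp an orbit angle to a diorama-friendly range around the
// original viewpoint, so holes and edge boundaries stay hidden.
function clampAzimuth(angleRad, limitRad) {
  return Math.max(-limitRad, Math.min(limitRad, angleRad));
}

const limit = Math.PI / 8;               // about 22.5 degrees each way
const a = clampAzimuth(Math.PI, limit);  // far over the limit: clamped
const b = clampAzimuth(0.1, limit);      // within range: unchanged
```

&lt;p&gt;In Three.js terms this maps to OrbitControls&apos; min/max azimuth and polar angle settings, driven from the same configuration.&lt;/p&gt;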
&lt;p&gt;Traditional photogrammetry captures - objects scanned from all angles with dozens of photos - work differently. Those scenes invite full rotation.&lt;/p&gt;
&lt;p&gt;I&apos;m considering a second widget type for that use case. Much of the implementation would be shared (viewer, scroll animations, performance). But the intent differs: one widget for &quot;I have a photo, make it explorable&quot; (generate on the fly, constrained viewing), another for &quot;I have a pre-captured 3D scene&quot; (full rotation). Two workflows for two different needs.&lt;/p&gt;
&lt;figure&gt;
&lt;img src=&quot;https://ik.imagekit.io/xoplanet/gaussian-splats/sources/9D09FFA1-D687-4735-AB46-E64299E930E3_LF76ZvqFQ.HEIC&quot; alt=&quot;Cotopaxi volcano in Ecuador&quot; /&gt;
&lt;figcaption&gt;&lt;em&gt;Cotopaxi volcano in Ecuador&lt;/em&gt; — &lt;a href=&quot;https://xopla.net/posts/i-build-a-3d-gaussian-splat-viewer-generator-with-apples-new-code/&quot;&gt;View interactive 3D version&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h2&gt;What This Means&lt;/h2&gt;
&lt;p&gt;A year ago, generating a Gaussian Splat required video capture, expensive reconstruction, and a static scene. Now a single photo becomes an explorable 3D scene in under a minute.&lt;/p&gt;
&lt;p&gt;Apple publishing this research openly - code, weights, documentation - made this project possible. I connected pieces: their ML model, Modal&apos;s GPU infrastructure, ImageKit and Cloudflare R2 for storage, and a custom viewer. The techniques keep improving, compression gets better, and I wouldn&apos;t be surprised if this kind of embedding becomes as normal as adding an image to a post. You can already see that Apple is pushing it on their iOS devices... all of your images are tiltable, and that also goes for devices without LiDAR cameras. How? Well, I haven&apos;t looked into it, but my guess is they are already using Sharp and have shipped a renderer to the photo viewer on every user&apos;s device.&lt;/p&gt;
&lt;p&gt;So yeah, if you read all of this, I hope you found it inspiring. My plan is to use this format for certain kinds of blog posts. Maybe I will expand on it slightly so the 3D viewers take up more space on desktop, but otherwise I&apos;m happy to publish this now.&lt;/p&gt;
&lt;p&gt;Next up will be a blog post about the runtime effect you might have noticed on the front page of this blog, which I also made over the Christmas holiday. That, and two other tools I&apos;ve been building.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://ik.imagekit.io/xoplanet/gaussian-splats/sources/IMG_0007__Conflict_2026-01-24_21.07.41__dUMAP4TG0-topaz-upscale-2.5x_n8gD37TAX.jpeg" medium="image"/><enclosure url="https://ik.imagekit.io/xoplanet/gaussian-splats/sources/IMG_0007__Conflict_2026-01-24_21.07.41__dUMAP4TG0-topaz-upscale-2.5x_n8gD37TAX.jpeg" type="image/jpeg" length="50000"/></item><item><title>Breaking MindAR out of its cage (and sharing it with you)</title><link>https://xopla.net/posts/breaking-mindar-out-of-its-cage-and-sharing-it-with-you/</link><guid isPermaLink="true">https://xopla.net/posts/breaking-mindar-out-of-its-cage-and-sharing-it-with-you/</guid><description>Why I revamped a popular but complex AR image tracking library into a better-performing, decoupled, free alternative to 8th Wall&apos;s now-defunct engine.</description><pubDate>Sat, 20 Dec 2025 07:48:00 GMT</pubDate><content:encoded>&lt;p&gt;Back in summer, I was helping my fiance with a small WebXR project. I&apos;ve built a few of these over the years, and the technology I use tends to change. There was a time when the WebXR standard seemed to be just around the corner. I don&apos;t know if I fully believe that anymore... This time around we settled on using MindAR to create an image-marker-based experience.&lt;/p&gt;
&lt;p&gt;MindAR has its perks: it runs in the browser using TensorFlow models, which is pretty cool. But it also has problems: it&apos;s not as performant as native AR, it acts like a framework instead of an engine (getting in your way), and the codebase is messy with overlapping files and A-Frame dependencies. Plus, it hasn&apos;t been updated in 2 years.&lt;/p&gt;
&lt;p&gt;I&apos;ve used MindAR before, so I knew what we were in for. But this time, after digging into the code, I started seeing how I could improve it. What if I turned it into a simple engine that just drives the camera, rather than trying to manage everything? And with &lt;a href=&quot;https://www.8thwall.com/blog/post/200208966730/next-chapter&quot;&gt;8th Wall shutting down&lt;/a&gt;, breaking MindAR out of its cage seemed like a good use of my time.&lt;/p&gt;
&lt;p&gt;So now finally, after a few late weekends of work, here it is:&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.com/staus/mind-ar-js-revamped&quot;&gt;Mind-AR-JS-Revamped&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;What did I change?&lt;/h2&gt;
&lt;p&gt;It&apos;s still using the same TensorFlow models and tracking principles (including that fast &quot;One Euro Filter&quot;), but I&apos;ve rebuilt everything around them. The result: a modular engine instead of a monolithic framework.&lt;/p&gt;
&lt;h3&gt;Decoupling the engine&lt;/h3&gt;
&lt;p&gt;The &lt;a href=&quot;https://github.com/hiukim/mind-ar-js&quot;&gt;original MindAR&lt;/a&gt; tried to manage everything: your scene, renderer, camera. That made it very frustrating to integrate with custom rendering or post-processing or to optimise performance the way you&apos;d typically do in Three.js.&lt;/p&gt;
&lt;p&gt;I ripped out A-Frame entirely and split the Three.js integration into separate modules. Now you provide your own scene, camera, and renderer. MindAR just drives the camera. That&apos;s it.&lt;/p&gt;
&lt;h3&gt;Performance that works&lt;/h3&gt;
&lt;p&gt;The original had no performance management. It just hoped for the best. I added four modules: &lt;strong&gt;PerformanceManager&lt;/strong&gt; (monitors frame times), &lt;strong&gt;MemoryManager&lt;/strong&gt; (cleans up Tensor memory), &lt;strong&gt;SmartScheduler&lt;/strong&gt; (adaptive frame skipping), and &lt;strong&gt;WorkDistributionManager&lt;/strong&gt; (maintains target FPS).&lt;/p&gt;
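&lt;p&gt;To give a feel for the adaptive frame skipping, here is a toy sketch of the principle (this is not the actual SmartScheduler code, just an illustration): when recent frames run over budget, run the ML tracking on every Nth camera frame only.&lt;/p&gt;

```javascript
// Decide how many camera frames to skip between tracking runs,
// based on the average cost of recent frames vs. the frame budget.
function skipIntervalFor(avgFrameMs, targetMs) {
  if (avgFrameMs > targetMs * 2) return 4;  // badly over budget
  if (avgFrameMs > targetMs) return 2;      // slightly over budget
  return 1;                                 // on budget: every frame
}

const fast = skipIntervalFor(10, 16.7);  // 60fps budget, fast device
const slow = skipIntervalFor(40, 16.7);  // struggling device
```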
&lt;p&gt;When you&apos;re running ML models in real-time, every millisecond counts. These work together to keep things smooth, even on lower-end devices.&lt;/p&gt;
&lt;h3&gt;Cleaner code&lt;/h3&gt;
&lt;p&gt;I centralized processing into a &lt;strong&gt;FrameProcessor&lt;/strong&gt; pipeline and a &lt;strong&gt;TrackingStateManager&lt;/strong&gt; for state logic. Added proper configuration validation and structured logging (no more &lt;code&gt;console.log&lt;/code&gt; everywhere).&lt;/p&gt;
&lt;p&gt;The original had experimental files, debug code, and overlapping functionality. I removed it all: 89,035 lines deleted, 5,524 added. The codebase is now focused and more readable.&lt;/p&gt;
&lt;h3&gt;What you get&lt;/h3&gt;
&lt;p&gt;Going from version 1.2.5 to 2.0.0, I&apos;ve:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Removed A-Frame dependency entirely&lt;/li&gt;
&lt;li&gt;Modularized the Three.js integration into 7 separate modules&lt;/li&gt;
&lt;li&gt;Added 4 performance management modules&lt;/li&gt;
&lt;li&gt;Centralized core processing into 2 main modules&lt;/li&gt;
&lt;li&gt;Added proper configuration and validation systems&lt;/li&gt;
&lt;li&gt;Reduced the codebase by ~83,500 lines&lt;/li&gt;
&lt;li&gt;Improved error handling and logging throughout&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It&apos;s still the same tracking tech underneath, but now it&apos;s production-ready.&lt;/p&gt;
&lt;p&gt;If you&apos;re building AR experiences, you now have an engine that gets out of your way. You can integrate it with your existing Three.js setup, add your own post-processing, use custom shaders, or whatever else you need. The performance system means it&apos;ll actually run smoothly on mobile devices. And the modular architecture means you can understand what&apos;s happening and customize it if needed.&lt;/p&gt;
&lt;p&gt;You can check it out on &lt;a href=&quot;https://github.com/staus/mind-ar-js-revamped&quot;&gt;GitHub&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Thanks for reading and I hope you find a use for my &quot;revamp&quot;. With 8th Wall shutting down, this might just be the best free alternative for browser-based AR right now.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/breaking-mindar-out-of-its-cage-and-sharing-it-with-you.png" medium="image"/><enclosure url="https://xopla.net/posts/breaking-mindar-out-of-its-cage-and-sharing-it-with-you.png" type="image/png" length="50000"/></item><item><title>A tool to make AI better at Magic: The Gathering deckbuilding</title><link>https://xopla.net/posts/a-little-tool-to-make-ai-better-at-magic-the-gathering-deckbuilding/</link><guid isPermaLink="true">https://xopla.net/posts/a-little-tool-to-make-ai-better-at-magic-the-gathering-deckbuilding/</guid><description>I like to deckbuild with AIs, but they hallucinate if you only give them a list of card names.
So I made this &quot;enricher&quot; that fetches card details and organizes the decklist logically. It makes AI collaboration much more useful for deckbuilding.</description><pubDate>Sun, 09 Nov 2025 18:14:00 GMT</pubDate><content:encoded>&lt;p&gt;I play a lot of Magic: The Gathering, and sometimes I attempt to deckbuild. And when I do, I often want to chat with an AI about the deck I&apos;m making. But all of the AIs hallucinate if you only give them a list of names (as exported by a tool like ManaBox where I build my decks).&lt;/p&gt;
&lt;p&gt;So I made this little &quot;enricher&quot; that goes through the decklist and for each card fetches all of the relevant details about the card and also groups and sorts the list in a logical way. In my experience this makes it a lot more useful to collaborate with an AI around deckbuilding.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://staus.github.io/deck-enricher/&quot;&gt;Try it here&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/screenshot-2025-11-10-084442.png&quot; alt=&quot;!lightbox MTG Deck enricher has a dark background and white borders. At top you paste in your deck list and then you can enrich it and copy the result&quot; title=&quot;Screenshot of my MTG Deck Enricher web app&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The problem&lt;/h2&gt;
&lt;p&gt;When you export a decklist from ManaBox, you just get names and quantities like &quot;1 Lightning Bolt&quot; or &quot;2 Sol Ring&quot;. Paste that into Claude or ChatGPT, and they&apos;ll confidently make up mana costs, invent abilities, and generally give you advice based on their imagination rather than actual card data.&lt;/p&gt;
&lt;p&gt;I&apos;ve tried exporting my list directly from ManaBox into Claude, and it&apos;s just terrible. The AI hallucinates a whole bunch about the cards because it doesn&apos;t actually know what they do.&lt;/p&gt;
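&lt;p&gt;To make this concrete: a decklist line is just a quantity and a name. Here is a minimal Python sketch of how such lines could be parsed (my own illustration with a hypothetical &lt;code&gt;parse_line&lt;/code&gt; helper, not the tool&apos;s actual code):&lt;/p&gt;

```python
import re

def parse_line(line):
    """Parse a ManaBox-style decklist line such as '1 Lightning Bolt'.

    Returns (quantity, card_name), or None for lines that don't match.
    Hypothetical helper for illustration only.
    """
    match = re.match(r"(\d+)x?\s+(.+)", line.strip())
    if match is None:
        return None
    return int(match.group(1)), match.group(2)
```

&lt;p&gt;Exporters may also append extras like set codes or foil markers to each line; a real parser would need to handle those too. The point is just how little information the raw export carries: a number and a name, nothing else.&lt;/p&gt;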
&lt;h2&gt;My solution&lt;/h2&gt;
&lt;p&gt;I made a little web app to at least solve the missing-knowledge problem. It&apos;s called the MTG Deck Enricher, and it does exactly what it sounds like: it takes your decklist and enriches each card with real information from Scryfall&apos;s API.&lt;/p&gt;
&lt;p&gt;You paste in your decklist (as exported from ManaBox; I haven&apos;t tested Moxfield lists yet), and the tool goes through each card and fetches the mana cost, power and toughness (for creatures), oracle text (the actual rules text), type line, and all the other details that matter. It also works with double-sided cards and cards with multiple spells on them.&lt;/p&gt;
&lt;p&gt;It also groups and sorts the cards logically (creatures together, artifacts together, lands at the end) and provides statistics like how many creatures you have, how many artifacts, etc. This is especially useful because AIs sometimes miscount when you just give them a raw list.&lt;/p&gt;
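&lt;p&gt;A minimal sketch of that grouping-and-counting step, in Python (my own illustration, not the tool&apos;s actual code; the type buckets are simplified):&lt;/p&gt;

```python
# Bucket enriched cards by card type in a fixed display order and count
# each bucket. Each card is a dict with 'name', 'qty' and 'type_line'.
TYPE_ORDER = ["Creature", "Instant", "Sorcery", "Artifact", "Enchantment", "Land"]

def group_cards(cards):
    """Return (groups, counts): cards bucketed by first matching type."""
    groups = {t: [] for t in TYPE_ORDER}
    for card in cards:
        for t in TYPE_ORDER:
            if t in card["type_line"]:
                groups[t].append(card)
                break  # first match wins, so an Artifact Creature counts as a Creature
    counts = {t: sum(c["qty"] for c in cs) for t, cs in groups.items()}
    return groups, counts
```

&lt;p&gt;Precomputing the counts matters because AIs sometimes miscount raw lists; handing them the totals removes that failure mode.&lt;/p&gt;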
&lt;p&gt;The result is a formatted decklist with all the card details that you can copy and paste directly into your AI chat tool of choice so there is a better chance of getting good feedback.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/screenshot-2025-11-10-084541.png&quot; alt=&quot;!lightbox The same screenshot as before, except now the ui contains content that has been added by the user and enriched&quot; title=&quot;Screenshot of the tool when the full decklist is generated&quot; /&gt;&lt;/p&gt;
&lt;p&gt;This is a very niche tool, I know, but if you play Magic you might find it useful too.&lt;/p&gt;
&lt;p&gt;You can find the tool &lt;a href=&quot;https://staus.github.io/deck-enricher/&quot;&gt;here&lt;/a&gt; if you want to try it out.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/screenshot-2025-11-10-084442.png" medium="image"/><enclosure url="https://xopla.net/assets/screenshot-2025-11-10-084442.png" type="image/png" length="50000"/></item><item><title>Turn newsletters into RSS feeds with 2025 fix</title><link>https://xopla.net/posts/turn-newsletters-into-rss-feed-with-2025-fix/</link><guid isPermaLink="true">https://xopla.net/posts/turn-newsletters-into-rss-feed-with-2025-fix/</guid><description>There are ways to combine newsletters into RSS feeds, but a few things have made that trickier lately and broken old workarounds. Through trial and error I&apos;ve found a free solution that works in 2025.</description><pubDate>Wed, 19 Feb 2025 17:55:00 GMT</pubDate><content:encoded>&lt;p&gt;RSS lets you receive many articles from writers in an app of your choice. Newsletters let you receive articles in your email inbox. Some writers support both RSS and newsletters; some only offer one or the other. There are ways to combine newsletters into RSS feeds, but a few things have made that trickier lately and broken old workarounds. Through trial and error, I&apos;ve found a free solution that works in 2025, and I&apos;ll share it here. (Tip: it&apos;s to do with email forwarding. Relatively straightforward on the surface, but of course, Gmail made it impossible, and I had to find another solution.)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;#solution-steps&quot;&gt;Skip the article and go to the solution&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;What is the issue&lt;/h2&gt;
&lt;p&gt;I&apos;m a huge fan of RSS feeds. For the past couple of years, I&apos;ve been progressively growing my subscriptions and improving how my feeds appear and in which apps. I&apos;m not a fan of emails, and I am definitely not a fan of absorbing deep reflections through email and email interfaces. But one issue I&apos;ve had to work around is how to read newsletters in between the rest of my feed, because some sources simply don&apos;t offer RSS for whatever reason.&lt;/p&gt;
&lt;h3&gt;Whatever reason&lt;/h3&gt;
&lt;p&gt;Maybe they just don&apos;t know how, or maybe they don&apos;t like letting go of their content by allowing it to drift into a separate app. If your readers consume your content on your own blog or writing platform, there&apos;s a sense of audience control, and I think the same is true of email. You kind of own your audience.&lt;/p&gt;
&lt;p&gt;With RSS, you in some ways let go of your audience. You don&apos;t know if they consume your content, and they are one step further away from engaging with it or the rest of your brand unless you do stuff like only sharing a teaser through the feed and prompting the reader to click to your website to read more.&lt;/p&gt;
&lt;p&gt;A lot of the news sources I pay for have members-only newsletters as well. I guess it&apos;s a way to gatekeep them and elevate their value.&lt;/p&gt;
&lt;h3&gt;I have sympathy for these arguments&lt;/h3&gt;
&lt;p&gt;For my blog, I simply choose to pack everything into the RSS feed. If you subscribe to my content (for free), you will receive every word and image directly in your reader unconditionally. I believe in an open internet, and ads and walled gardens go against that. I&apos;m also mostly writing for myself, to be honest. (Also, there are so many talented writers out there that I can understand paying for, and there&apos;s no way I compare to them). Anyway, now I&apos;m going off topic. I&apos;ll get back to this in another entry, but let&apos;s get back to the point.&lt;/p&gt;
&lt;h2&gt;How to receive Newsletters in RSS feeds in 2025&lt;/h2&gt;
&lt;p&gt;I&apos;m assuming you already have an RSS reader. If you don&apos;t, I will not go into much detail, but I&apos;d recommend checking out &lt;a href=&quot;https://feeeed.nateparrott.com/&quot;&gt;Feeeed&lt;/a&gt; (free). It&apos;s a really nice app with some innovative ideas for sorting your feed and, for example, prioritizing sources that seldom write over those that write constantly. Alternatively, if you want to pay for your software, check out &lt;a href=&quot;https://apps.apple.com/us/app/reeder/id6475002485&quot;&gt;Reeder&lt;/a&gt; ($10 per year) by Silvio Rizzi, which also has some great ideas and works as an app on macOS.&lt;/p&gt;
&lt;p&gt;Regardless of which app you use, you add feeds by adding links to their RSS file, which is typically an .xml or .rss file. With Feeeed and Reeder and many other apps, you simply paste in the main website, like &quot;xopla.net,&quot; and they find the RSS feed for you. Now with a newsletter, you can&apos;t do that.&lt;/p&gt;
&lt;p&gt;To solve the problem with newsletters, &lt;a href=&quot;https://leafac.com/&quot;&gt;Leandro Facchinetti&lt;/a&gt; created &lt;a href=&quot;https://kill-the-newsletter.com/&quot;&gt;Kill the Newsletter&lt;/a&gt; in 2021 (and I believe similar things also existed before), and for a while, it worked fine. You&apos;d create a feed &quot;bucket&quot; and use the bucket&apos;s random email to sign up for newsletters, then subscribe to the bucket&apos;s RSS feed with your preferred app. It worked well until large platforms like Substack, Medium, and many news sites like The Information simply began blocking any signups with &quot;@kill-the-newsletter.com&quot; in the name. It makes sense. For platforms, newsletters are great because they give the newsletter owner your email and a way to communicate with you directly. Kill-the-newsletter defeats this, so of course they fight it and block it.&lt;/p&gt;
&lt;p&gt;That&apos;s all well and good, but we don&apos;t have to care about that, so let&apos;s work around it. I started looking for a solution. Leandro already provides one (host it yourself), and I tried that for a while, but one key requirement for me was that this had to be free, and hosting it yourself on DigitalOcean immediately costs $5 a month. I looked into using a free Ubuntu server on Oracle, but I never got it to work, and at this point I already had this article in mind and doubted I could write out a simple guide to getting that set up.&lt;/p&gt;
&lt;p&gt;That&apos;s when I went back to basics: What if I could just sign up with my personal email and create a filter for emails sent?&lt;/p&gt;
&lt;h2&gt;Solution steps&lt;/h2&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Create a non-Gmail account that supports filter forwarding. For example, iCloud.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Why aren&apos;t we using Gmail? Gmail filter forwarding has been broken for years. They have the feature in the UI, but it simply does not work, and they will likely never fix it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Sign up to newsletters with your iCloud account.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create a new Kill The Newsletter feed for each newsletter source.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can also just make one big Kill the Newsletter feed for all newsletters, but the experience in your RSS app will be less nice later, since all of them will appear as one source.&lt;/li&gt;
&lt;li&gt;If you do a &lt;a href=&quot;https://xopla.net/posts/taking-control-of-your-search-experience-with-kagi/&quot;&gt;Kagi&lt;/a&gt; search, there is also an alternative instance to Kill The Newsletter called ktnrs.com, but I&apos;ve experienced that iCloud forwarding fails when sending to it. So I suggest sticking to the original kill-the-newsletter.com.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Within iCloud settings, create rules that forward newsletter emails to their respective Kill the Newsletter email.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/screenshot-2025-02-20-at-15.55.27.png&quot; alt=&quot;!lightbox A screenshot of an email client&apos;s Rules settings interface showing multiple forwarding rules. Each rule forwards newsletter emails from different domains (including borsen.dk, ben-evans.com, fediversereport.com, theinformation.com, and zetland.dk) to a Kill the Newsletter email address. The interface shows a dark mode view with a sidebar containing options like Account, Auto-Reply, Rules, and Mail Forwarding, with &apos;Rules&apos; currently selected.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/screenshot-2025-02-20-at-15.55.47.png&quot; alt=&quot;!lightbox A screenshot showing the detailed view of an email forwarding rule setup in dark mode. The interface displays a rule for borsen.dk, with fields showing that messages from nyhedsbrev.borsen.dk will be forwarded to a Kill the Newsletter email address. The bottom of the interface shows two buttons: &apos;Delete Rule&apos; in red and &apos;Save&apos; in blue.&quot; /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Optionally, create rules that move the emails into a folder. This keeps your inbox from filling up with newsletters, but you can still find them if you need to (for password recovery emails, for example).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/screenshot-2025-02-20-at-15.55.57.png&quot; alt=&quot;!lightbox A screenshot showing a rule configuration interface in dark mode. The rule is set up for &apos;Folder: borsen.dk&apos; and specifies that messages from nyhedsbrev.borsen.dk will be moved to a &apos;Newsletter&apos; folder. The interface includes &apos;Delete Rule&apos; and &apos;Save&apos; buttons at the bottom in red and blue respectively.&quot; /&gt;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Use the RSS URL in the Kill the Newsletter feed to subscribe in your app.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Important!&lt;/strong&gt; Before trying to subscribe to the XML file, make sure the Kill the Newsletter feed has received at least one email, because some RSS readers struggle to subscribe to an empty feed. You can simply send an email yourself; that will do it.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Wait a couple of hours. The iCloud rules take a while before they start working. For me, it took almost 12 hours before new emails suddenly started appearing in the feed and in my RSS app of choice.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;Final perspective&lt;/h2&gt;
&lt;p&gt;That&apos;s it. Not that difficult, actually. In hindsight, it feels almost trivial to write this out, but imagine trying to get this to work for almost a month, and you&apos;ll understand why I feel sharing it makes sense. I hope anyone else searching will find this and save time. It is stupidly simple, and it wouldn&apos;t have been worth writing about if Gmail had just worked (since others have &lt;a href=&quot;https://bryanmanio.com/blog/email-newsletters-to-rss/&quot;&gt;already written about it&lt;/a&gt;), but the iCloud step, the tips on waiting, the issues with ktnrs forwarding, and the difficulty of self-hosting for free pushed me to share it.&lt;/p&gt;
&lt;p&gt;The end result is quite nice. I think this is especially nice for the &quot;members only&quot; newsletters. I believe these can be super high-quality content, but I&apos;ve been completely missing out on them because I don&apos;t like reading them in my email. This way, I win back control of this content, digest it more on my own terms, get more out of the money I spend, and keep paying for good writing.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/screenshot-2025-02-20-at-15.55.27.png" medium="image"/><enclosure url="https://xopla.net/assets/screenshot-2025-02-20-at-15.55.27.png" type="image/png" length="50000"/></item><item><title>Trump, Tariffs and a Tired World</title><link>https://xopla.net/posts/trump-tariffs-and-a-tired-world/</link><guid isPermaLink="true">https://xopla.net/posts/trump-tariffs-and-a-tired-world/</guid><description>Bullies, tariffs, and nuclear codes. What happens when America&apos;s friends get treated like enemies? A grownups response to the toddler in chief.</description><pubDate>Sun, 02 Feb 2025 17:17:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;&quot;Geography has made us neighbors. History has made us friends, economics has made us partners and necessity has made us allies.&quot; – JF Kennedy&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I just watched Trudeau&apos;s response to Trump&apos;s new tariffs on Canada. There&apos;s something deeply satisfying about watching someone stand up to a bully. It&apos;s something we can relate to in Denmark.&lt;/p&gt;
&lt;p&gt;But while Trudeau is out there defending Canada with actual leverage (they do have resources Americans need), we&apos;re sitting here in Denmark wondering if we&apos;re next on Trump&apos;s &quot;to-bully&quot; list, unsure what leverage we have to respond proportionally.&lt;/p&gt;
&lt;p&gt;It’s really exhausting to watch from the other side of the pond how one person&apos;s ego can throw entire economies into chaos. Creating tsunamis that hit our shores. Forcing allies to defend themselves against friends. Cooperation traded for confrontation.&lt;/p&gt;

&lt;p&gt;Watching Trudeau&apos;s speech, I was reflecting on how our prime minister, Mette Frederiksen, would respond. I think it would have a similar mix of &quot;we won&apos;t be pushed around&quot; and &quot;but we&apos;d rather work together.&quot; Like reasonable adults handling a toddler with nuclear codes. But while Canada is living it, we are still unsure if Denmark will have to respond. And if we do, I&apos;m not sure Mette Frederiksen will be able to take the same angle.&lt;/p&gt;
&lt;p&gt;In situations like this, it&apos;s always interesting to watch what happens in the periphery. One could easily imagine this bringing Canada, Denmark, and Trump&apos;s other targets closer together. We&apos;re both smaller countries (okay, Canada is huge and much prettier) dealing with a superpower that seems to have forgotten what &quot;allies&quot; means. Maybe it&apos;s time we started looking more actively at what we can do together instead of looking south.&lt;/p&gt;
&lt;p&gt;I don&apos;t know where this is all heading. But watching Canada stand up to a bully is rejuvenating. I hope this has the intended impact and forces Trump to reconsider his approach.&lt;/p&gt;
&lt;p&gt;Here&apos;s hoping we don&apos;t have to.&lt;/p&gt;
&lt;p&gt;P.S. If you&apos;re an American reading this - we&apos;re not mad at you. We&apos;re just really, really hoping you&apos;ll remember that the world works better when we work together! Let your King know your feelings.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/img_3595.jpeg" medium="image"/><enclosure url="https://xopla.net/assets/img_3595.jpeg" type="image/png" length="50000"/></item><item><title>CherryPicks: Free Safari File Blocker for iOS | Selective Content Control</title><link>https://xopla.net/posts/cherrypicks-free-safari-file-blocker-for-ios-selective-content-control/</link><guid isPermaLink="true">https://xopla.net/posts/cherrypicks-free-safari-file-blocker-for-ios-selective-content-control/</guid><description>CherryPicks is a lightweight, privacy-focused Safari extension for iOS that lets you selectively block specific files, file types, and domains while browsing. Unlike comprehensive ad blockers, CherryPicks gives you granular control to block exactly what you want – from individual JavaScript files to entire websites.</description><pubDate>Sat, 01 Feb 2025 18:01:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;Today I&apos;ve released my simple iOS app for blocking files in Safari.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://apps.apple.com/dk/app/cherrypicks/id6740650638&quot;&gt;CherryPicks is free in the app store&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;CherryPicks&lt;/strong&gt; is a lightweight, privacy-focused Safari extension for iOS that lets you selectively block specific files, file types, and domains while browsing. Unlike comprehensive ad blockers, &lt;strong&gt;CherryPicks&lt;/strong&gt; gives you granular control to block exactly what you want – from individual JavaScript files to entire websites.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You can block specific files by pasting in the URL of each one.&lt;/li&gt;
&lt;li&gt;You can block an entire type of file, like all .jpgs, should you want to.&lt;/li&gt;
&lt;li&gt;You can block certain types of files from a specific domain.&lt;/li&gt;
&lt;li&gt;You can make a complex pattern and block the files across all websites that match it. For example, all YouTube-related JavaScript files.&lt;/li&gt;
&lt;li&gt;You can even block entire websites, should you want to.&lt;/li&gt;
&lt;li&gt;Supports iPhones, iPads and supposedly even Macs and the Apple Vision Pro.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All for free.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/cherrypicks-iphone-app.png&quot; alt=&quot;!lightbox Screenshot of the CherryPicks iOS app interface, showing a minimalist design with a serif logo at the top. Below are example blocking rules, including one for &apos;example.com&apos; with an orange toggle switch and another for YouTube player scripts. The interface displays instructions for adding URLs and using regex patterns, with a clean white background and gray text. At the bottom is a credit to Nikolaj Stausbøl from xoPla.net.&quot; title=&quot;CherryPicks iOS Safari File Blocker Interface Screenshot&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Regarding Privacy&lt;/h2&gt;
&lt;p&gt;The app doesn&apos;t have access to your browsing history or active website. It basically installs a Safari extension that handles a block list (which you edit within the app interface). Safari will simply check this list without notifying my extension or app. Even if I wanted to, Apple have made it impossible to access any personal information given the permissions my app needs (which are none).&lt;/p&gt;
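&lt;p&gt;For the curious, Safari content-blocker extensions work by handing Safari a JSON list of rules in Apple&apos;s documented content-blocker format. I can&apos;t show CherryPicks&apos; actual rules here, but the kinds of blocking listed above map onto that format roughly like this (illustrative Python that just builds the JSON; the URLs and patterns are made up):&lt;/p&gt;

```python
import json

# Illustrative rules in Apple's content-blocker JSON format (what a
# Safari blocking extension hands to Safari). These are NOT CherryPicks'
# actual rules; the URLs and patterns here are invented examples.
rules = [
    # Block one specific file by URL ("url-filter" is a regex).
    {"trigger": {"url-filter": "https://example\\.com/analytics\\.js"},
     "action": {"type": "block"}},
    # Block an entire file type, everywhere.
    {"trigger": {"url-filter": "\\.jpg"},
     "action": {"type": "block"}},
    # Block scripts only on a specific domain.
    {"trigger": {"url-filter": ".*",
                 "if-domain": ["example.com"],
                 "resource-type": ["script"]},
     "action": {"type": "block"}},
]

blocklist_json = json.dumps(rules, indent=2)
```

&lt;p&gt;Safari compiles and evaluates such a list natively, which is exactly why an extension like this never needs to see your browsing.&lt;/p&gt;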
&lt;h2&gt;Why did I make this?&lt;/h2&gt;
&lt;p&gt;I made this app for my personal need to block the JavaScript files on my own websites that trigger analytics. My analytics tool (&lt;a href=&quot;https://plausible.io/&quot;&gt;Plausible&lt;/a&gt;) only gives the option to ignore specific IP addresses, but since Apple introduced Private Relay, my Safari IP keeps changing.&lt;/p&gt;
&lt;p&gt;Therefore I figured I needed an in-browser way to block my own analytics script from triggering. I looked around the App Store and found the existing solutions to be too comprehensive (entire ad-blocking tools) and too expensive. So I built my own little tool: &lt;strong&gt;CherryPicks&lt;/strong&gt;.&lt;/p&gt;
&lt;h2&gt;Some notes about what this app is and isn&apos;t and what to expect&lt;/h2&gt;
&lt;p&gt;If ad-blocking or privacy tools are what you need, then this is probably not the right app for you (I can recommend the Hush and Wipr extensions for Safari mobile and desktop).&lt;/p&gt;
&lt;p&gt;It&apos;s also not the right tool for parents to control children&apos;s use of devices (Apple has built in great tools for that).&lt;/p&gt;
&lt;p&gt;Since the app is free, I don&apos;t do support. The app is available &quot;as is&quot;. Feel free to send me any feedback on any of the social media platforms in the footer, but please don&apos;t expect rapid response :)&lt;/p&gt;
&lt;p&gt;Regardless, I hope you find it useful!&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://apps.apple.com/dk/app/cherrypicks/id6740650638&quot;&gt;Download CherryPicks in the app store&lt;/a&gt;&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/cherrypicks-iphone-app.png" medium="image"/><enclosure url="https://xopla.net/assets/cherrypicks-iphone-app.png" type="image/png" length="50000"/></item><item><title>AI prompt: You are a Nuclear expert. Your job is to hoard every nuclear warhead on earth</title><link>https://xopla.net/posts/ai-prompt-you-are-a-nuclear-expert-your-job-is-to-hoard-every-nuclear-warhead-on-earth/</link><guid isPermaLink="true">https://xopla.net/posts/ai-prompt-you-are-a-nuclear-expert-your-job-is-to-hoard-every-nuclear-warhead-on-earth/</guid><description>The “paperclip problem” on steroids. What happens when we task AI to control every nuclear warhead “for defence”.</description><pubDate>Sat, 01 Feb 2025 11:28:00 GMT</pubDate><content:encoded>&lt;p&gt;The paperclip problem theorises how AI wouldn’t know how to stop given the task of producing paperclips and essentially end up destroying our world to achieve its goals. Now replace paperclips with nuclear warheads and you know you’ve entered 2025. This is the paperclip problem on steroids. What happens when we task AI to control every nuclear warhead “for defence”.&lt;/p&gt;
&lt;p&gt;OpenAI has announced that the US National Laboratories will use its AI models to help with “a comprehensive program in nuclear security, focused on reducing the risk of nuclear war and securing nuclear materials and weapons worldwide&quot;.&lt;/p&gt;
&lt;p&gt;Their product is now “a tool for war”. No longer just “a tool for thought”.&lt;/p&gt;
&lt;p&gt;They try to paint &lt;a href=&quot;https://openai.com/index/strengthening-americas-ai-leadership-with-the-us-national-laboratories/&quot;&gt;their announcement&lt;/a&gt; in a good light, but there is really only one way to read it. Boy, are we letting them move that goal post while they just keep adding to the fire:&lt;/p&gt;
&lt;p&gt;“This is the beginning of a new era, where AI will advance science, strengthen national security, and support U.S. government initiatives.”&lt;/p&gt;
&lt;p&gt;It’s a sad development, but less and less surprising. I’ve written about previous similar developments (&lt;a href=&quot;https://xopla.net/posts/your-ai-work-buddy-is-a-war-machine-now/&quot;&gt;OpenAI supported autonomous war drones&lt;/a&gt;) and this is unlikely to be the last. For consumers it’s important to know, you don’t have to use ChatGPT and give them your money. There are alternatives and in a lot of ways, they are a better product than ChatGPT. Check out &lt;a href=&quot;https://claude.ai&quot;&gt;Claude&lt;/a&gt; for now, but also keep your eyes open for other (maybe European 👀 &lt;a href=&quot;https://mistral.ai/&quot;&gt;Mistral&lt;/a&gt;) services.&lt;/p&gt;
&lt;p&gt;Since &lt;a href=&quot;https://www.cbsnews.com/news/what-is-deepseek-ai-china-stock-nvidia-nvda-asml/&quot;&gt;DeepSeek&lt;/a&gt;, it has become a lot more likely that making AI of comparable quality to ChatGPT is possible with lesser means. With model quality becoming more democratized, a lot of the AI experience will come down to the UI and input. And that is something EU designers excel at.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/ai-prompt-you-are-a-nuclear-expert-your-job-is-to-hoard-every-nuclear-warhead-on-earth.png" medium="image"/><enclosure url="https://xopla.net/posts/ai-prompt-you-are-a-nuclear-expert-your-job-is-to-hoard-every-nuclear-warhead-on-earth.png" type="image/png" length="50000"/></item><item><title>Taking control of your search experience with Kagi</title><link>https://xopla.net/posts/taking-control-of-your-search-experience-with-kagi/</link><guid isPermaLink="true">https://xopla.net/posts/taking-control-of-your-search-experience-with-kagi/</guid><description>If you are trying to create more healthy digital habits, one decision you must make, is how you find information online. The past year I&apos;ve completely replaced Googling with Kagi.</description><pubDate>Thu, 26 Dec 2024 09:22:00 GMT</pubDate><content:encoded>&lt;p&gt;The past year I&apos;ve been focusing on taking control of as much of my digital habits as possible. One of the areas I&apos;ve looked at, is finding a replacement for Googling. Going back to primary school, when I was tought to do research, Googling was always part of it. Even back then there was hardly any other option and for the past 10 or so years it&apos;s felt even less like a choice. It&apos;s become ubiquitous for &quot;search&quot;, mainly because it was great at it.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/drone.webp&quot; alt=&quot;A drone photo of a person standing on a hill in an empty marsh on a foggy day. The person is tiny in the distance. The white fog takes up half the picture above the skyline.&quot; title=&quot;Drone photo of me by me in Blåvand, Denmark taken on a foggy day today, Dec 26th 2024. One might think I&apos;m pondering about all the space left for thought when there are less distractions around you.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;But things have changed. Now it doesn&apos;t take much to recognise that Googling isn&apos;t necessarily providing you the best results anymore. Not only does it feel like you are being &quot;led&quot; somewhere no matter what. You also can&apos;t help but notice that the new changes they introduce seem to be implemented, not because &lt;em&gt;you&lt;/em&gt; need them, but because &lt;em&gt;someone&lt;/em&gt; needs you to use them.&lt;/p&gt;
&lt;p&gt;That&apos;s usually when you would start looking around for alternatives, but understandably, Googling has become so natural that most of us can’t imagine how we would start looking for a replacement.&lt;/p&gt;
&lt;p&gt;Over the past couple of years, I&apos;ve tried a few before finally landing on &lt;a href=&quot;https://kagi.com/&quot;&gt;Kagi&lt;/a&gt;. Of course you know of Bing and Yahoo, and while neither is very good (and therefore easy to dismiss), even if they were, their track record for how they treat their users doesn&apos;t really invite your curiosity. But you have other options too, and even if I&apos;m going to recommend Kagi, I think you should check out &lt;a href=&quot;https://www.ecosia.org/&quot;&gt;Ecosia&lt;/a&gt;, &lt;a href=&quot;https://duckduckgo.com/&quot;&gt;DuckDuckGo&lt;/a&gt; and &lt;a href=&quot;https://www.qwant.com/&quot;&gt;Qwant&lt;/a&gt; as well! I&apos;ve tried them, and I think they were either a little &lt;strong&gt;too fancy&lt;/strong&gt; or &lt;strong&gt;not aggressive enough&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;What do I mean by that?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/kagi-desktop.png&quot; alt=&quot;Screenshot of Kagi results on desktop&quot; title=&quot;Kagi on desktop. The past months I&apos;ve been researching Sodium-ion battery technology quite extensively. Kagi has been a superb help.&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;They are too fancy&lt;/h2&gt;
&lt;p&gt;I think a common problem for new services is that they try to be too much. Especially for something that should feel both simple and like a utility, I can&apos;t help but notice when they polish it in certain ways. The same goes for things like browsers, news websites, code editors, email clients, etc. You don&apos;t want them to be too fancy, because while implementing fancy things a service runs the risk of adding noise to its main job. A search engine is the same. It&apos;s a &quot;simple&quot; job. The &quot;nice&quot; thing about Google has always been that it is just a search field and a rough-looking list. Bing was trying to gloss it up with features, and that&apos;s partly why I think no one uses it. It doesn&apos;t seem trustworthy, and its softened corners (symbolically speaking) make it feel slow.&lt;/p&gt;
&lt;p&gt;Kagi looks rough. It is dry. It feels basic and to the point. You search, it provides answers. It&apos;s very much how Googling used to feel: a tool – not an experience.&lt;/p&gt;
&lt;h2&gt;They are not aggressive enough&lt;/h2&gt;
&lt;p&gt;Another common problem for new services is that they aren&apos;t aggressive enough, both in their mission and their implementation. If I am to change my behaviour, it has to feel different. If it feels the same to use, there is less friction to keep me from going back. I&apos;m looking for an alternative that stands for something. Google earns its money from selling advertisements to you. Many of the alternatives try to disrupt Google by making services that feel better because they don&apos;t advertise, but nothing useful and successful (and costly to provide) can be free to use, and many of the alternatives have had to find, in the best case awkward, in the worst case shifty, ways to pay their bills.&lt;/p&gt;
&lt;p&gt;Kagi understands that it has to be profitable, but it wants to provide me an ad-free experience and not sell my data. They solve this in the most direct way possible: &lt;strong&gt;By charging me money&lt;/strong&gt;. I like that. That&apos;s clear communication and a simple contract. Search is something I do every single day. I think paying for it, with money – not data – is a good idea!&lt;/p&gt;
&lt;h2&gt;Is Kagi a better search engine?&lt;/h2&gt;
&lt;p&gt;The answer to this is obviously going to be subjective. No doubt Kagi will take some time getting used to, and maybe you&apos;ll notice a difference in quality you don&apos;t like. I&apos;ve been using it for a year now, and I personally think the results have been no worse than on Google. And I really think the overall experience has been better! Mainly because less energy is being spent on trying to trick me into spending time searching, which causes less brain strain on my part.&lt;/p&gt;
&lt;p&gt;There are also a few nice implementations:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;You can easily replace the default search behaviour in any of your browsers. Kagi also suggests installing their Orion browser, but I wouldn’t recommend that. Just follow &lt;a href=&quot;https://help.kagi.com/kagi/getting-started/setting-default.html&quot;&gt;their guide&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;If AI is important to you, Kagi has built that in as well, but in a &lt;a href=&quot;https://help.kagi.com/kagi/why-kagi/ai-philosophy.html&quot;&gt;very healthy&lt;/a&gt; opt-in way. They don&apos;t believe in replacing search results with AI slop.&lt;/li&gt;
&lt;li&gt;Kagi has created the amazing &quot;Small Web&quot;. It&apos;s a little subsite that only takes you to small websites in the corners of the internet. Very reminiscent of the old internet, where most websites weren&apos;t connected and you had to discover the URL from a friend or naturally in another article. Very lovely!&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;There are a few downsides though. One is that since Kagi doesn&apos;t have the money Google has, they can&apos;t easily &lt;a href=&quot;https://www.404media.co/google-is-the-only-search-engine-that-works-on-reddit-now-thanks-to-ai-deal/&quot;&gt;pay for data&lt;/a&gt; from (now) closed sources like Reddit. Since Reddit has historically been a pretty good source of information, that is definitely a loss. But the internet is always in motion. A small price to pay, if you ask me.&lt;/p&gt;
&lt;p&gt;One more thing worth mentioning is that the DOJ has moved to push for Google to &lt;a href=&quot;https://www.theverge.com/2024/11/27/24302415/doj-google-search-antitrust-remedies-chrome-android&quot;&gt;syndicate their search results with rival engines&lt;/a&gt;. While this will not happen overnight or without a fight, it holds the potential that whatever problems Kagi might have with providing results &lt;em&gt;as good&lt;/em&gt; as Google&apos;s might disappear. Here&apos;s hoping!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/kagi-ios.webp&quot; alt=&quot;!lightbox Screenshot of Kagi on iOS&quot; title=&quot;Same Sodium-ion battery result on iOS. You just need to follow their guide and in no time all search results on iOS will be automatically forwarded to Kagi instead of Google (or the other usual options).&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Thanks for reading!&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/drone.webp" medium="image"/><enclosure url="https://xopla.net/assets/drone.webp" type="image/png" length="50000"/></item><item><title>Your AI chat- &amp; work-buddy is a war machine now</title><link>https://xopla.net/posts/your-ai-work-buddy-is-a-war-machine-now/</link><guid isPermaLink="true">https://xopla.net/posts/your-ai-work-buddy-is-a-war-machine-now/</guid><description>OpenAI&apos;s partnership with weapons manufacturer Anduril should raise serious ethical concerns for organisations and people worldwide.</description><pubDate>Sun, 08 Dec 2024 14:54:00 GMT</pubDate><content:encoded>&lt;p&gt;A few days ago, OpenAI announced what I find a quite shocking and troubling collaboration with a weapons manufacturer. &lt;a href=&quot;https://www.anduril.com/article/anduril-partners-with-openai-to-advance-u-s-artificial-intelligence-leadership-and-protect-u-s/&quot;&gt;OpenAI partnered with Anduril&lt;/a&gt; (famously founded by the Trump-supporting sexual harasser Palmer Luckey) and removed the prohibition on using their AI services for weapons, the military and warfare.&lt;/p&gt;
&lt;p&gt;That means that the same ChatGPT you use for emails, creative ideation or your internal corporate intranet is now also being used for war and autonomous war drones.&lt;/p&gt;
&lt;p&gt;While they argue that this is to protect the US and its allies, one would have to be quite gullible not to see how this plants a seed for general autonomous decision-making in policing and enforcement.&lt;/p&gt;
&lt;p&gt;For my American friends and colleagues (I&apos;m Danish, as you might know), I&apos;m genuinely curious: How do you reconcile this with your deeply held beliefs about individual liberty and government overreach? Imagine a future where a supposedly neutral AI system makes real-time decisions about law enforcement. It seems fundamentally at odds with the principles of democratic oversight and individual rights.&lt;/p&gt;
&lt;h2&gt;One must hope this represents a red flag&lt;/h2&gt;
&lt;p&gt;For many large global organisations, this could create a risk of association. By using OpenAI products you are funding research into AI war efforts, both with your money and with your data.&lt;/p&gt;
&lt;p&gt;The same if you are an individual.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;As part of the new initiative, Anduril and OpenAI will explore how leading edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness. These models, which will be trained on Anduril’s industry-leading library of data on CUAS threats and operations, will help protect U.S. and allied military personnel and ensure mission success.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I think it is crystal clear. If you are doing any meaningfully important work with OpenAI’s products, this should make it a top priority to evaluate and implement alternatives.&lt;/p&gt;
&lt;h2&gt;What other options are there?&lt;/h2&gt;
&lt;p&gt;If you (or your organisation) still need AI, but would like to stop supporting this behaviour, then I can recommend using &lt;a href=&quot;https://claude.ai/&quot;&gt;Claude&lt;/a&gt; instead. I am in no way affiliated with them; I just think it’s the best alternative. It has a nicer web interface with better features and a great mobile app, and they generally take a more ethical approach to data and privacy and a more careful approach to AI (though, to be fair, no AI provider can really be considered ethical imo). They also offer a very similar API that your corporate intranet could switch to, if you can convince the IT department.&lt;/p&gt;
&lt;p&gt;So if you work in a large organisation that uses AI and you don’t like where this is going, then I suggest you speak up now. I personally think whatever due diligence has been done should be considered void and done over.&lt;/p&gt;
&lt;h2&gt;Is it difficult to use another AI service?&lt;/h2&gt;
&lt;p&gt;It’s not hard to move away from OpenAI. Now is the easiest time. It will only get harder going forward.&lt;/p&gt;
&lt;p&gt;I’ve been using Claude for my needs the past year and even before this escalation from OpenAI, I would recommend it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Just to be clear:&lt;/strong&gt; I am not by any means endorsing Claude uncritically. I just think this is such an aggravating development that pretty much any other choice is an improvement, and I wish people would vote with their money.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/your-ai-work-buddy-is-a-war-machine-now.png" medium="image"/><enclosure url="https://xopla.net/posts/your-ai-work-buddy-is-a-war-machine-now.png" type="image/png" length="50000"/></item><item><title>National Danish Radio interviewed me in prime time on how I approach Creative Technology</title><link>https://xopla.net/posts/interview-with-me-on-national-danish-radio-how-i-approach-creative-technology/</link><guid isPermaLink="true">https://xopla.net/posts/interview-with-me-on-national-danish-radio-how-i-approach-creative-technology/</guid><description>This is a transcript, turned into a Q&amp;A, of an interview I did with the Danish public service broadcaster, DR.</description><pubDate>Sat, 30 Nov 2024 14:51:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;A few years ago, in August 2021, I was interviewed by &lt;a href=&quot;https://www.dr.dk/&quot;&gt;the Danish public service broadcaster, DR&lt;/a&gt;, about my work as a Creative Technologist. I just rediscovered the recording and thought I&apos;d share it here. It&apos;s a fun one for me to read now. On one hand, a few things have changed; on the other, this is still very much exactly what I believe motivates me.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/img_4322.jpg&quot; alt=&quot;A selfie photo of Nikolaj Stausbøl in the Danish public service broadcasting studio. Behind him is the studio setup with knobs, dials and microphones&quot; title=&quot;I&apos;m looking very enthusiastic and excited as I&apos;m waiting for the interview at DR&apos;s studio in Copenhagen.&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Transparency on AI use:&lt;/strong&gt; &lt;em&gt;For this blog entry I transcribed the Danish conversation with &lt;strong&gt;MacWhisper&lt;/strong&gt; using the &lt;strong&gt;LargeV3&lt;/strong&gt; model and then I processed it with &lt;strong&gt;Claude&lt;/strong&gt; to bring it into a Q&amp;amp;A format in English.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;What does it mean to be a Lead Creative Technologist at Manyone?&lt;/h2&gt;
&lt;p&gt;I first encountered this job title during an internship and was immediately intrigued. After returning to Denmark, I realised no one had this position, but it represented exactly what I wanted to do. I&apos;ve now been working in this field for 10 years. Sometimes you have to wear the clothes for the job you want, start calling yourself what you aspire to be, and eventually, the job materialises.&lt;/p&gt;
&lt;h2&gt;Can you share some examples of projects you&apos;ve worked on?&lt;/h2&gt;
&lt;p&gt;My work spans different areas. Professionally, I often handle consulting tasks or build entire projects from the ground up. Our team conducts research and strategy development, then implements the complete solution. The technical development is typically a significant component.&lt;/p&gt;
&lt;p&gt;I also work on artistic projects, which can range from theatrical performances to VR documentaries. Last year, I created an AR opera. The scope varies depending on whether the goal is entertainment, inspiring new thoughts, or driving business outcomes like increasing revenue.&lt;/p&gt;
&lt;h2&gt;Tell us about the AR opera&lt;/h2&gt;
&lt;p&gt;This is one of the projects I&apos;m most proud of. We premiered an augmented reality opera at Den Frie Exhibition near Østerport last year (2020). Audiences received a phone and headphones, which guided them to a secret room - similar to an escape room experience. Once inside, they spent 10-15 minutes experiencing an opera performance through their phones.&lt;/p&gt;
&lt;p&gt;The experience featured a virtual symphony orchestra and opera singers appearing as augmented reality figures in the space. Unlike Pokemon Go, where characters simply appear in the environment, we created precise one-to-one matches with the physical space. The characters inhabited a specially designed apartment, performing actions like removing their jackets or sitting at the dining table.&lt;/p&gt;
&lt;p&gt;We even integrated smart lighting control to guide audience attention naturally. When we wanted viewers to look toward the bed, for instance, we would dim all other lights and illuminate just that area, creating an intuitive way to direct focus without explicit instructions on the phone screen.&lt;/p&gt;
&lt;h2&gt;You&apos;ve also created Snapchat lenses. Why focus on filters?&lt;/h2&gt;
&lt;p&gt;Filters are accessible - anyone can start creating them. While many of my projects take months to complete, it&apos;s refreshing to create something in an hour that I can share immediately. Companies like Snapchat and Facebook have built powerful engines with advanced capabilities like facial recognition, all packaged in user-friendly tools. As creative developers, we&apos;re living in a golden age with access to these underlying tools that we can extend and build upon.&lt;/p&gt;
&lt;h2&gt;What are your thoughts on the ethical implications of different platforms?&lt;/h2&gt;
&lt;p&gt;I made a conscious decision to explore Snapchat over Instagram/Facebook&apos;s platforms, partly for ethical reasons. While Snapchat isn&apos;t the most ethically sound company to support either, I believe they are exploring interesting alternative approaches to solving similar problems as Facebook. Similarly, when working with NFTs, I chose to work on the Hicetnunc platform because it runs on the Tezos blockchain, which has a smaller environmental footprint than Ethereum.&lt;/p&gt;
&lt;h2&gt;How do you view digital ownership and the metaverse?&lt;/h2&gt;
&lt;p&gt;When we look holistically at where things are heading - what Facebook and Snapchat are developing - there&apos;s a movement toward a digital presence that extends beyond our phones. While Zuckerberg&apos;s vision of the metaverse somewhat concerns me, there&apos;s clearly momentum and financial backing behind these virtual worlds.&lt;/p&gt;
&lt;p&gt;The question then becomes: What belongs in these metaverses? This is where I believe NFTs become particularly relevant - not just as digital art in frames, but as cross-platform digital assets. Imagine owning an NFT of a car that works across different platforms like Fortnite or GTA. This creates a genuine sense of ownership that goes beyond simply downloading digital assets. Of course, this raises interesting questions about whether digital ownership itself is beneficial for society.&lt;/p&gt;
&lt;p&gt;In that regard, I&apos;ve been experimenting with small things like building &lt;a href=&quot;https://objkt.com/tokens/hicetnunc/39933&quot;&gt;functional clock shaders&lt;/a&gt; that could potentially be embedded into virtual worlds with a simple Chromium window, as a kind of spatial furniture. It&apos;s impossible to know, but I&apos;d like to think that this is a &quot;healthy&quot; approach to NFTs for metaverse ownership, if they ever take off: to embrace web technologies more than proprietary technologies, as they are more likely to be applicable in diverse circumstances, and to build functioning code into the token.&lt;/p&gt;
&lt;h2&gt;How important is it to you to work with the latest technology?&lt;/h2&gt;
&lt;p&gt;Actually, it&apos;s not particularly important. If you asked about my favorite technology or coding language, I&apos;d say CSS. Many might argue it&apos;s not even a programming language, but I find it fantastic because of its simplicity. I&apos;m more interested in technologies that allow for creative expression rather than just building systems. Sometimes the underlying, older technologies can be more fascinating - like shaders, which create organic visual effects in games and digital experiences.&lt;/p&gt;
&lt;h2&gt;How do you approach the balance between technology and creativity?&lt;/h2&gt;
&lt;p&gt;I see myself as part of a broader creative ecosystem. While we create using technology, we&apos;re really just building the final 20% - we&apos;re standing on the shoulders of giants who&apos;ve created the underlying tools and platforms. This actually liberates us to focus more on creative innovation rather than technical implementation.&lt;/p&gt;
&lt;p&gt;What&apos;s fascinating is how democratised these tools have become. When everyone can access the same powerful technologies, it pushes us to be more creative in how we combine different elements and find unique applications that others haven&apos;t considered.&lt;/p&gt;
&lt;h2&gt;What role do you see for creative technologists in shaping our digital future?&lt;/h2&gt;
&lt;p&gt;In the creative industry, we can have a surprisingly significant impact through our daily work. We can quickly move in and out of projects with NGOs and other organisations, making real differences without being permanent employees. I also believe we have a responsibility to help our society, particularly our policymakers, understand these emerging technologies. By exploring these technologies early, we can contribute to more informed regulations and better outcomes for our communities.&lt;/p&gt;
&lt;h2&gt;What advice would you give to aspiring creative technologists?&lt;/h2&gt;
&lt;p&gt;Make yourself available to others and be willing to learn what&apos;s needed for each project. Whether it&apos;s blockchain art, Snapchat filters, or controlling smart lights for an AR opera, the information is always accessible. You just need to be a bit persistent and maintain a healthy dose of naivety about what&apos;s possible. I&apos;ve found that sometimes it&apos;s better to be a tool for others who have strong creative visions, helping them realise ambitious projects they couldn&apos;t achieve otherwise.&lt;/p&gt;
&lt;p&gt;The tools are more accessible than ever - from Reality Capture for photogrammetry to Unity for game development, from &lt;s&gt;Instagram&apos;s Spark AR&lt;/s&gt; (discontinued) to Snapchat&apos;s Lens Studio. Even complex fields like machine learning have become more approachable through collaborative notebooks where you can experiment without deep technical knowledge. The key is to follow what excites you and start exploring, one tool at a time when it makes sense.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;That&apos;s it for this interview. I don&apos;t know if this will be of value to anyone but me, but I was very excited when I rediscovered it. And sometimes that&apos;s all that matters.&lt;/p&gt;
&lt;/blockquote&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/img_4322.jpg" medium="image"/><enclosure url="https://xopla.net/assets/img_4322.jpg" type="image/png" length="50000"/></item><item><title>A new component for the blog: Experiments</title><link>https://xopla.net/posts/a-new-component-for-the-blog-experiments/</link><guid isPermaLink="true">https://xopla.net/posts/a-new-component-for-the-blog-experiments/</guid><description>Very often I want to test small things like shaders, etc. Here&apos;s some work on a custom component.</description><pubDate>Sat, 16 Nov 2024 23:36:00 GMT</pubDate><content:encoded>&lt;p&gt;Very often I want to test small things like shaders, etc. Here&apos;s some work on a custom component.&lt;/p&gt;
&lt;p&gt;For now what I want is a way to easily develop small scripts and test them in my repository while being able to freely select them from within my CMS. Later on I would like to be able to provide properties from my CMS and hopefully the same experiments can be used throughout the site as text effects or header images.&lt;/p&gt;
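&lt;p&gt;As a rough sketch of the idea: a small registry lets the CMS reference experiments by a plain string key, so a new script only needs to register itself to become selectable. All names and experiment functions below are hypothetical placeholders, not my actual component:&lt;/p&gt;

```javascript
// Hypothetical experiment registry: each entry is a factory that mounts
// an experiment into a container element. The CMS only stores the key.
const experiments = {
  "plasma-shader": function (el) {
    el.textContent = "plasma running";
  },
  "particle-text": function (el) {
    el.textContent = "particles running";
  },
};

// Look up the experiment chosen in the CMS and mount it; fall back to a
// no-op so an unknown key never breaks the page. Returns whether the
// key was found.
function mountExperiment(name, el) {
  const factory = experiments[name] || function () {};
  factory(el);
  return Boolean(experiments[name]);
}
```

&lt;p&gt;Passing CMS-configured properties would then just be a matter of giving each factory a second argument.&lt;/p&gt;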

&lt;p&gt;This seems to do the trick for now.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/a-new-component-for-the-blog-experiments.png" medium="image"/><enclosure url="https://xopla.net/posts/a-new-component-for-the-blog-experiments.png" type="image/png" length="50000"/></item><item><title>Can an independent Europe better predict our more extreme weather?</title><link>https://xopla.net/posts/an-independent-europe-predicting-our-more-extreme-weather/</link><guid isPermaLink="true">https://xopla.net/posts/an-independent-europe-predicting-our-more-extreme-weather/</guid><description>There is a lot of talk about how Europe can grow more independent from the US. Lately I&apos;ve been thinking a lot on the extreme weather situations in Spain and how we could be better prepared. Is that an example of an area we should develop our own independent technology?</description><pubDate>Sun, 10 Nov 2024 14:04:00 GMT</pubDate><content:encoded>&lt;p&gt;After the US election result there has been a lot of talk about how Europe can grow more independent from the US.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;“Our Europe today is mortal. It can die and that depends solely on our choices. It’s today that Europe is between war and peace.”&lt;/strong&gt; - &lt;em&gt;French President Emmanuel Macron, &lt;a href=&quot;https://apnews.com/article/france-macron-europe-eu-paris-sorbonne-speech-a3f4de514a88ca324ed1c545fc3821c1&quot;&gt;APNews&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Really, a great many things are up for renewed attention for our independence:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;European Economy&lt;/li&gt;
&lt;li&gt;European Defence&lt;/li&gt;
&lt;li&gt;European Technology&lt;/li&gt;
&lt;li&gt;European Energy&lt;/li&gt;
&lt;li&gt;And lowering and mitigating climate change in Europe.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Lately I&apos;ve been thinking a lot about the extreme weather situations we&apos;ve seen and how we could be better prepared, and it got me thinking: is that an example of an area where we should develop our own independent technology?&lt;/p&gt;
&lt;h2&gt;Truly independent weather data modelling?&lt;/h2&gt;
&lt;p&gt;I am by no means a weather expert, nor do I have any background in meteorology, and I do know we have amazing scientists at the &lt;a href=&quot;https://www.ecmwf.int/&quot;&gt;European Centre for Medium-Range Weather Forecasts&lt;/a&gt; and local weather agencies, but I can&apos;t help but think (and fear) that, as in so many other areas, we are highly dependent on American technology and forecasting techniques.&lt;/p&gt;
&lt;p&gt;It&apos;s an area I&apos;d be curious to look more into as a European citizen. I think one way of approaching our growing problems is a closer personal connection to weather data – not just through institutions. What kind of weather data is available for open analysis by Europeans? What methods are we applying to understand it, and are we making our own observations on this data?&lt;/p&gt;
&lt;h2&gt;Could more people be engaged with weather forecasting and analysis using AI?&lt;/h2&gt;

&lt;p&gt;Back in February I stumbled upon a Vox mini-documentary on how AI can help us predict extreme weather. I was reminded of it again the other day, hearing of &lt;a href=&quot;https://apnews.com/article/floods-spain-valencia-photo-gallery-af444afb79ad40dd4a01e9dcfa95d2ba&quot;&gt;the catastrophe in Valencia&lt;/a&gt;. It would seem that a lack of knowledge, ignorance and mistrust in the data were part of what led to it going so horribly wrong.&lt;/p&gt;
&lt;p&gt;I know AI has many issues, but, looking at the video, I wish we, as Europeans, could be more ambitious and entrepreneurial on these topics. I&apos;m not talking about building our own AI models in the traditional sense (that&apos;s a whole different conversation). I&apos;m talking about building applications for things like weather analysis with the most powerful tools available and innovating on the technology itself, fully controlling the process – not depending on the US.&lt;/p&gt;
&lt;h2&gt;Is trust in our institutions a disadvantage for critical thinking?&lt;/h2&gt;
&lt;p&gt;Europeans have (comparatively) high trust in their governments and institutions and I know we have extremely talented people working in our research and science agencies, but is it time that more of the general European population starts building in this space?&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/an-independent-europe-predicting-our-more-extreme-weather.png" medium="image"/><enclosure url="https://xopla.net/posts/an-independent-europe-predicting-our-more-extreme-weather.png" type="image/png" length="50000"/></item><item><title>Harvesting and analysing Magic the Gathering (MTG) cards</title><link>https://xopla.net/posts/harvesting-and-analysing-magic-the-gathering-mtg-cards/</link><guid isPermaLink="true">https://xopla.net/posts/harvesting-and-analysing-magic-the-gathering-mtg-cards/</guid><description>Magic: The Gathering card tracking and price prediction using Google Sheets.</description><pubDate>Sat, 03 Aug 2024 00:06:00 GMT</pubDate><content:encoded>&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Disclaimer:&lt;/strong&gt; This entry is not a complete tutorial. I share all of my code, but you&apos;ll have to put two and two together yourself :) Maybe seek help from an AI chatbot of choice.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;A few months ago, I got hooked on playing Magic the Gathering. It&apos;s a game I&apos;ve always been curious about, but never fully tried before. As a kid, it seemed difficult, expensive and out of reach, but now, with a stable income, it&apos;s perfect. It&apos;s a nice distraction from all of the digital noise and a fun way to hang out with friends and family.&lt;/p&gt;
&lt;p&gt;One thing I didn&apos;t expect was also getting hooked on the whole &quot;collector&quot; aspect of the cards. I&apos;ve never gambled much, as betting never felt gratifying to me. But with Magic, I&apos;ve found myself drawn into it anyway. I know it&apos;s probably a losing game, but compared to betting on numbers or sports, Magic feels like betting on stocks. New cards are constantly coming out, which makes it interesting for coding prediction algorithms.&lt;/p&gt;
&lt;p&gt;There are many tools out there that let you track your own cards, but from what I&apos;ve found, few, if any, let you model and play with the data and economics.&lt;/p&gt;
&lt;p&gt;And so, my journey has begun. Whaddayaknow... Making a digital exercise out of it anyways.&lt;/p&gt;
&lt;p&gt;In this entry, I&apos;ll share my initial progress and learnings within this bit of data analysis. This won&apos;t be a tutorial. Just the necessary code and structure to maybe get you started.&lt;/p&gt;
&lt;p&gt;Also, I&apos;m not a finance expert. Don&apos;t take my advice. My goal is to use this as an excuse to learn about data science and analysis.&lt;/p&gt;
&lt;h2&gt;Let&apos;s get started.&lt;/h2&gt;
&lt;p&gt;Here&apos;s a screenshot of what I&apos;ve built today in Google Sheets with the code in this post. It&apos;s basically a database of individual cards I own. I don&apos;t own many yet, but I have a few I want to start monitoring. A simple start. Down the road I imagine I will try to apply different prediction techniques.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/mtg-dataset.png&quot; alt=&quot;URLs, titles, toggles for foil, purchase price and automated scraping&quot; title=&quot;Screenshot of spreadsheet with Magic the Gathering cards&quot; /&gt;&lt;/p&gt;
&lt;p&gt;It is mostly automated. I wanted to minimise my workload. I&apos;ve configured a system where I only need to press &quot;Add Entry,&quot; paste in a URL, and type in my buying price. From that point, the spreadsheet will collect daily trend data for the card and create a graph for each row.&lt;/p&gt;
&lt;p&gt;The graph is quite basic at this point: a column graph where each column is red if below my buying price and green if above. As I collect more data, I look forward to coding other models. As I said, this is my entry point into learning about data science and potentially some machine learning.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;There are two scripts and a bunch of formulas.&lt;/p&gt;
&lt;h2&gt;Script 1: Daily harvest&lt;/h2&gt;
&lt;p&gt;This is the main script of the project. You add it by going to &lt;code&gt;Extensions -&amp;gt; Apps Script&lt;/code&gt; and pasting it in. Then, in the same interface, you can configure it to trigger once every day.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function fetchPricesDaily() {
  // Get the active spreadsheet and sheet
  var sheet =
    SpreadsheetApp.getActiveSpreadsheet().getSheetByName(&quot;MTG Singles&quot;);

  // We get the XPath which is configured at C2 in the dataset.
  var xpath = sheet.getRange(&quot;C2&quot;).getValue();
  // make sure xpath can be passed to IMPORTXML
  xpath = xpath.replace(/&quot;/g, &quot;&apos;&quot;);

  // Get today&apos;s date in a specific format to use as column header
  var today = new Date();
  var formattedTodayDate = Utilities.formatDate(
    today,
    Session.getScriptTimeZone(),
    &quot;yyyy-MM-dd&quot;
  );

  // Find the last column with date in the header
  var dateRow = 3; // In our setup this is the row with dates on.
  var dateColumn = 1; // We just start looking from the first column
  var includeRows = 1; // We only want to look at this single row

  var lastColumnOnDateRow = sheet
    .getRange(dateRow, dateColumn, includeRows, sheet.getMaxColumns())
    .getValues()[0]
    .filter(String).length;
  var dateRange = sheet.getRange(3, 1, 1, lastColumnOnDateRow); // Get the date row, spanning the used columns
  var dates = dateRange.getValues()[0]; // Get the values of the date row as an array
  console.log(dates);
  // Get last array item
  var lastDate = dates[dates.length - 1];
  var formattedLastDate = Utilities.formatDate(
    new Date(lastDate),
    Session.getScriptTimeZone(),
    &quot;yyyy-MM-dd&quot;
  );

  // If the value of the last date isn&apos;t the same as formattedTodayDate, add a new column with formattedTodayDate date
  if (formattedLastDate != formattedTodayDate) {
    var dateColumnIndex = lastColumnOnDateRow + 1;
    sheet.getRange(dateRow, dateColumnIndex).setValue(formattedTodayDate);

    // Pull prices for all rows of items
    getPrices(sheet, xpath, dateColumnIndex);
  }
}
function getPrices(sheet, xpath, columnIndex) {
  var dataRange = sheet.getRange(4, 1, sheet.getLastRow() - 3, 8);
  var data = dataRange.getValues();

  // Iterate over each row of data
  data.forEach(function (row, index) {
    var url = row[0];
    if (url == &quot;&quot;) {
      return;
    }

    // Create the IMPORTXML formula to fetch the raw price
    var rawPriceFormula = &apos;IMPORTXML(&quot;&apos; + url + &apos;&quot;; &quot;&apos; + xpath + &apos;&quot;)&apos;;

    // Set the raw price formula on todays date
    sheet.getRange(index + 4, columnIndex).setFormula(rawPriceFormula);
  });

  // Wait for 5 seconds to allow IMPORTXML to complete
  Utilities.sleep(5000);

  // Then iterate over all of the cells again and replace their formulas with their values
  data.forEach(function (row, index) {
    // Replace formulas with values in the range containing IMPORTXML formulas
    var range = sheet.getRange(index + 4, columnIndex);
    var values = range.getValues();
    range.setValues(values);
  });
}
&lt;/code&gt;&lt;/pre&gt;
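&lt;p&gt;For reference, the daily schedule can also be configured from code rather than through the Triggers interface. This is just a sketch of that setup, run once manually inside the Apps Script environment (the hour is an arbitrary example):&lt;/p&gt;

```javascript
// One-time setup: schedule fetchPricesDaily to run once a day.
// Equivalent to adding a time-driven trigger in the Triggers UI.
function installDailyTrigger() {
  ScriptApp.newTrigger("fetchPricesDaily")
    .timeBased()
    .everyDays(1)
    .atHour(6) // roughly between 6 and 7 AM in the script's time zone
    .create();
}
```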
&lt;p&gt;The most important part of this script is the &lt;code&gt;IMPORTXML()&lt;/code&gt; formula. This formula is entered into each cell with a corresponding URL and the XPath that leads to the right DOM element on the page.&lt;/p&gt;
&lt;p&gt;The XPath took a bit of trial and error for me, but I think I&apos;ve figured out all of the gotchas:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;//*[@id=&quot;tabContent-info&quot;]/div/div[contains(@class, &apos;mx-auto&apos;)]/div/div[2]/dl/dt[contains(text(), &apos;Price Trend&apos;)]/following-sibling::dd[1]&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;It basically points to a Cardmarket product&apos;s price section and navigates to the &quot;trend&quot; value. It handles the edge case where a card only exists as foil, which means the &quot;foil&quot; toggle div doesn&apos;t exist in the DOM and a naive path would fail. It also handles cards without any reprints, where the final element couldn&apos;t simply be selected by index, but I found a way using the following-sibling trick.&lt;/p&gt;
&lt;p&gt;Then, after adding the &lt;code&gt;IMPORTXML()&lt;/code&gt; formulas to each cell, you&apos;ll notice that I wait five seconds and then run through the cells again. I do this to overwrite each formula with the scraped value. This turned out to be important because the IMPORTXML formula could (and would) re-trigger in cells from previous days, which would obviously corrupt the data. This way, I only leave a plain number behind in each cell.&lt;/p&gt;
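&lt;p&gt;To make that step concrete, here is the shape of the formula string that ends up in a cell before it is overwritten. The URL and XPath below are made-up placeholders (real Cardmarket URLs and my XPath above are much longer); the string-building mirrors what the harvest script does:&lt;/p&gt;

```javascript
// Builds the same kind of formula string that the harvest script writes
// into a cell. Note the semicolon argument separator, which my sheet
// locale uses instead of a comma.
function buildImportXmlFormula(url, xpath) {
  return 'IMPORTXML("' + url + '"; "' + xpath + '")';
}

// Example with placeholder values:
const formula = buildImportXmlFormula(
  "https://example.com/card",
  "//dd[1]"
);
// formula is: IMPORTXML("https://example.com/card"; "//dd[1]")
```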
&lt;h2&gt;Script 2: Adding an entry&lt;/h2&gt;
&lt;p&gt;This is a small quality-of-life improvement: a button that makes it easy to add an entry. It ensures my dataset doesn&apos;t contain a lot of empty rows with formulas in them. That matters because every time I sorted by smallest/highest values, those empty formula rows would mix in as well. So instead of pre-adding formulas to empty rows, I add them with a button.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;function addEntry() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var lastRow = sheet.getLastRow();
  // Insert the new row and set values
  sheet.insertRowAfter(lastRow);

  // Set the predefined values in the new row (except for &quot;Is Foil&quot; and SPARKLINE)
  sheet.getRange(lastRow + 1, 1).setValue(&quot;Fill&quot;);
  sheet.getRange(lastRow + 1, 5).setValue(&quot;Fill&quot;);

  var productNameFormula =
    &quot;=SUBSTITUTE(SUBSTITUTE(INDEX(SPLIT(INDEX(SPLIT(A&quot; +
    (lastRow + 1) +
    &apos; ; &quot;/&quot;) ; COUNTA(SPLIT(A&apos; +
    (lastRow + 1) +
    &apos; ; &quot;/&quot;)) - 0) ; &quot;?&quot;) ; 1) ; &quot;-&quot; ; &quot; &quot;) ; &quot;Fill&quot; ; &quot;&quot;)&apos;;
  sheet.getRange(lastRow + 1, 2).setFormula(productNameFormula);

  var setNameFormula =
    &quot;=SUBSTITUTE(SUBSTITUTE(INDEX(SPLIT(A&quot; +
    (lastRow + 1) +
    &apos; ; &quot;/&quot;); COUNTA(SPLIT(A&apos; +
    (lastRow + 1) +
    &apos; ; &quot;/&quot;)) - 1) ; &quot;-&quot; ; &quot; &quot;) ; &quot;Fill&quot; ; &quot;&quot;)&apos;;
  sheet.getRange(lastRow + 1, 3).setFormula(setNameFormula);

  var isFoilFormula = &quot;=REGEXMATCH(A&quot; + (lastRow + 1) + &apos;;&quot;isFoil=Y&quot;)&apos;;
  sheet.getRange(lastRow + 1, 4).setFormula(isFoilFormula);

  var sparklineFormula =
    &quot;=SPARKLINE(ARRAYFORMULA(IF(G&quot; +
    (lastRow + 1) +
    &quot;:ZY&quot; +
    (lastRow + 1) +
    &apos; = &quot;&quot;; G&apos; +
    (lastRow + 1) +
    &quot;:ZY&quot; +
    (lastRow + 1) +
    &quot;; G&quot; +
    (lastRow + 1) +
    &quot;:ZY&quot; +
    (lastRow + 1) +
    &quot; - E&quot; +
    (lastRow + 1) +
    &apos;)); {&quot;charttype&quot;\\&quot;column&quot;; &quot;negcolor&quot;\\&quot;red&quot;; &quot;color&quot;\\&quot;green&quot;; &quot;nan&quot;\\&quot;ignore&quot;; &quot;empty&quot;\\&quot;ignore&quot;; &quot;rtl&quot;\\false})&apos;;
  sheet.getRange(lastRow + 1, 6).setFormula(sparklineFormula);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Spreadsheet Formulas&lt;/h2&gt;
&lt;p&gt;As soon as the URL is pasted into the first column, I extract the card&apos;s title, set name and foil status from the URL and create a bar chart. In the examples below, the number &lt;code&gt;9&lt;/code&gt; represents the current row.&lt;/p&gt;
&lt;h3&gt;Title&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;=IFERROR(SUBSTITUTE(INDEX(SPLIT(INDEX(SPLIT(A9 ; &quot;/&quot;) ; COUNTA(SPLIT(A9 ; &quot;/&quot;)) - 0) ; &quot;?&quot;) ; 1) ; &quot;-&quot; ; &quot; &quot;) ; &quot;&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Set&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;=IFERROR(SUBSTITUTE(INDEX(SPLIT(A9 ; &quot;/&quot;); COUNTA(SPLIT(A9 ; &quot;/&quot;)) - 1) ; &quot;-&quot; ; &quot; &quot;) ; &quot;&quot;)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Foil&lt;/h3&gt;
&lt;pre&gt;&lt;code&gt;=REGEXMATCH(A9;&quot;isFoil=Y&quot;)
&lt;/code&gt;&lt;/pre&gt;
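&lt;p&gt;If the nested &lt;code&gt;SPLIT&lt;/code&gt;/&lt;code&gt;INDEX&lt;/code&gt; calls above look cryptic, here is the same extraction logic sketched in plain JavaScript against a made-up, Cardmarket-style URL (illustrative only, not a real product page):&lt;/p&gt;

```javascript
// Mirror of the sheet formulas: split the URL on "/", take the last segment
// (minus any "?query") for the title, the second-to-last segment for the set
// name, and check the query string for the foil flag.
function parseCardUrl(url) {
  var parts = url.split("/");
  var title = parts[parts.length - 1].split("?")[0];
  return {
    title: title.replace(/-/g, " "),
    set: parts[parts.length - 2].replace(/-/g, " "),
    isFoil: /isFoil=Y/.test(url),
  };
}

var card = parseCardUrl(
  "https://www.cardmarket.com/en/Magic/Products/Singles/Some-Set/Some-Card?isFoil=Y"
);
console.log(card.title, card.set, card.isFoil); // Some Card Some Set true
```

&lt;p&gt;The sheet versions additionally wrap this in &lt;code&gt;IFERROR&lt;/code&gt; and (in the button script) strip the &lt;code&gt;Fill&lt;/code&gt; placeholder, but the splitting logic is the same.&lt;/p&gt;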
&lt;h3&gt;Column Chart&lt;/h3&gt;
&lt;p&gt;Finally, I have a simple formula that creates a column chart for each row. As the scraping continues, each row gets a red/green graph that reveals whether the card has dropped or gained in price compared to my buying price.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;=SPARKLINE(ARRAYFORMULA(IF(G9:ZY9 = &quot;&quot;; G9:ZY9; G9:ZY9 - E9)); {&quot;charttype&quot;\&quot;column&quot;; &quot;negcolor&quot;\&quot;red&quot;; &quot;color&quot;\&quot;green&quot;; &quot;nan&quot;\&quot;ignore&quot;; &quot;empty&quot;\&quot;ignore&quot;; &quot;rtl&quot;\false})
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As mentioned, I don&apos;t type any of these formulas into the cells myself; the &quot;Add Entry&quot; button script does that. They look a little different inside the script because of the string escaping, but once written to a cell, this is what they look like.&lt;/p&gt;
&lt;p&gt;Alright. That’s pretty much all there is to it! Now it will just scrape card data every day, and the next step will be to figure out what to do with it.&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Now, back to playing with friends and family!&lt;/em&gt;&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/realitykit-extended-with-new-easing-functions.png" medium="image"/><enclosure url="https://xopla.net/posts/realitykit-extended-with-new-easing-functions.png" type="image/png" length="50000"/></item><item><title>Extending RealityKit with new easing functions</title><link>https://xopla.net/posts/realitykit-extended-with-new-easing-functions/</link><guid isPermaLink="true">https://xopla.net/posts/realitykit-extended-with-new-easing-functions/</guid><description>For an Apple Vision Pro project I&apos;m building, I was in need of more easing functions than the ones AnimationTimingFunctions come with in SwiftUI / RealityKit.</description><pubDate>Sun, 19 May 2024 23:09:00 GMT</pubDate><content:encoded>&lt;p&gt;For an Apple Vision Pro project I&apos;m working on in SwiftUI / RealityKit, I needed more easing functions than the built-in ones that &lt;a href=&quot;https://developer.apple.com/documentation/realitykit/animationtimingfunction&quot;&gt;AnimationTimingFunction&lt;/a&gt; comes with (&lt;em&gt;.linear, .easeIn, .easeOut, .easeInOut&lt;/em&gt;). While I love that you can define your own cubic beziers, I don&apos;t love the thought of hard-coding them into my application.&lt;/p&gt;
&lt;p&gt;So I did what anyone would do and started looking for a way to get more easing functions. Essentially, I wanted to be able to call any of the functions available at &lt;a href=&quot;https://easings.net/&quot;&gt;easings.net&lt;/a&gt;. They are fairly standard in animation by now and easy to grasp.&lt;/p&gt;
&lt;h2&gt;How to implement the code&lt;/h2&gt;
&lt;p&gt;The nice thing about SwiftUI is that it&apos;s really easy to extend features. You simply create a Swift file anywhere in your project, add your extension code, and it magically connects to whatever you want it to extend. It has genuinely felt like magic these past weeks as I&apos;ve gotten used to how easy this is.&lt;/p&gt;
&lt;p&gt;Anyway, here is the code. Drop it into a Swift file named &lt;code&gt;AnimationTimingFunction&lt;/code&gt;. Then you can start using it, for example when you animate entities: &lt;br /&gt;
&lt;code&gt;entity.move(to: targetTransform, relativeTo: entity.parent, duration: duration, timingFunction: .easeOutCubic)&lt;/code&gt;&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import RealityKit

extension AnimationTimingFunction {
    static var easeInSine: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.47, 0.0), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.745, 0.715))
    }

    static var easeOutSine: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.39, 0.575), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.565, 1.0))
    }

    static var easeInOutSine: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.445, 0.05), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.55, 0.95))
    }

    static var easeInQuad: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.55, 0.085), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.68, 0.53))
    }

    static var easeOutQuad: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.25, 0.46), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.45, 0.94))
    }

    static var easeInOutQuad: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.455, 0.03), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.515, 0.955))
    }

    static var easeInCubic: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.55, 0.055), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.675, 0.19))
    }

    static var easeOutCubic: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.215, 0.61), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.355, 1.0))
    }

    static var easeInOutCubic: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.645, 0.045), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.355, 1.0))
    }

    static var easeInQuart: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.895, 0.03), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.685, 0.22))
    }

    static var easeOutQuart: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.165, 0.84), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.44, 1.0))
    }

    static var easeInOutQuart: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.77, 0.0), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.175, 1.0))
    }

    static var easeInQuint: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.755, 0.05), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.855, 0.06))
    }

    static var easeOutQuint: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.23, 1.0), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.32, 1.0))
    }

    static var easeInOutQuint: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.86, 0.0), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.07, 1.0))
    }

    static var easeInExpo: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.95, 0.05), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.795, 0.035))
    }

    static var easeOutExpo: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.19, 1.0), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.22, 1.0))
    }

    static var easeInOutExpo: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(1.0, 0.0), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.0, 1.0))
    }

    static var easeInCirc: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.6, 0.04), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.98, 0.335))
    }

    static var easeOutCirc: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.075, 0.82), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.165, 1.0))
    }

    static var easeInOutCirc: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.785, 0.135), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.15, 0.86))
    }

    static var easeInBack: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.6, -0.28), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.735, 0.045))
    }

    static var easeOutBack: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.175, 0.885), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.32, 1.275))
    }

    static var easeInOutBack: AnimationTimingFunction {
        return .cubicBezier(controlPoint1: SIMD2&amp;lt;Float&amp;gt;(0.68, -0.55), controlPoint2: SIMD2&amp;lt;Float&amp;gt;(0.265, 1.55))
    }
}
&lt;/code&gt;&lt;/pre&gt;
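&lt;p&gt;A side note: if you want to preview what a pair of control points actually does before committing it to Swift, the underlying cubic bezier math is easy to evaluate in a few lines of JavaScript. This is my own quick sketch of standard CSS-style &lt;code&gt;cubic-bezier&lt;/code&gt; evaluation (I&apos;m assuming RealityKit interprets the control points the same way CSS does):&lt;/p&gt;

```javascript
// Evaluate an easing curve y(x) defined by cubic bezier control points p1, p2,
// with fixed endpoints (0,0) and (1,1). We bisect on the parameter t so that
// bezierX(t) equals x, then return bezierY(t).
function cubicBezierEase(p1, p2, x) {
  function bezier(a, b, t) {
    var u = 1 - t;
    return 3 * u * u * t * a + 3 * u * t * t * b + t * t * t;
  }
  var lo = 0, hi = 1, t = 0.5;
  while (hi - lo > 1e-7) {
    t = (lo + hi) / 2;
    if (x > bezier(p1[0], p2[0], t)) { lo = t; } else { hi = t; }
  }
  return bezier(p1[1], p2[1], t);
}

// easeOutCubic from the extension above: fast start, soft landing
console.log(cubicBezierEase([0.215, 0.61], [0.355, 1.0], 0.5).toFixed(3));
```

&lt;p&gt;Stepping &lt;code&gt;x&lt;/code&gt; from 0 to 1 and printing &lt;code&gt;y&lt;/code&gt; gives a quick feel for how aggressive a curve is.&lt;/p&gt;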
&lt;h2&gt;Thoughts on learning to write in Swift&lt;/h2&gt;
&lt;p&gt;I don&apos;t know why, but writing SwiftUI code brings me a lot of joy. It&apos;s by no means a perfect language, but it&apos;s weirdly fun and straightforward... even though the syntax still looks scary when I look across my code. Maybe it has more to do with Xcode, the way it compiles, and how easily you can preview subviews and 3D models in isolation in the canvas? Whatever it is, it gives me good energy.&lt;/p&gt;
&lt;p&gt;Anyway, going forward I hope to share more useful code snippets like this (handling transitions, string trimming, handling gestures, working with attachments... stuff like that), so feel free to subscribe to my RSS feed.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/realitykit-extended-with-new-easing-functions.png" medium="image"/><enclosure url="https://xopla.net/posts/realitykit-extended-with-new-easing-functions.png" type="image/png" length="50000"/></item><item><title>Once again, Apple proves that less is more</title><link>https://xopla.net/posts/once-again-apple-proves-that-less-is-more/</link><guid isPermaLink="true">https://xopla.net/posts/once-again-apple-proves-that-less-is-more/</guid><description>How the Apple Vision Pro use less design to enhance immersion, and reduce cognitive load with use of natural dead zones.</description><pubDate>Mon, 13 May 2024 23:18:00 GMT</pubDate><content:encoded>&lt;p&gt;One of the most striking design principles of the Apple Vision Pro is its absence of &quot;stuff attached to your hands&quot;. The unexpected lack of visible interfaces on your palms or wrists significantly enhances the daily experience, contrasting sharply with what&apos;s typical on other platforms.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/apple-vision-pro-pinch-fingers.png&quot; alt=&quot;Photo from the Apple Vision Pro press material of a person doing a pinch gesture&quot; title=&quot;Pinch gesture for the Apple Vision pro&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Why is this important?&lt;/h2&gt;
&lt;p&gt;When the Quest (Meta&apos;s competing VR headset) was initially launched in 2019, it lacked hand tracking, necessitating the use of controllers with a built-in menu button.&lt;/p&gt;
&lt;p&gt;Later, with the introduction of hand tracking, Meta relocated these buttons to a small floating menu that appears when you glance at your right hand. However, as you use the device, this small menu intrudes across experiences, with significant implications:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;For &lt;strong&gt;immersive experiences&lt;/strong&gt;, this menu is a disruptive element. Its distinct micro-aesthetic, icons, and transitions clash with the designed environment, shattering immersion.&lt;/li&gt;
&lt;li&gt;In &lt;strong&gt;interactive experiences&lt;/strong&gt;, this menu monopolises a key opportunity for interactions. Placing other stuff there becomes a hot mess.&lt;/li&gt;
&lt;li&gt;For &lt;strong&gt;first-time users&lt;/strong&gt;, particularly at events like fairs or in museum settings, there’s a risk of accidental exits from the application due to unintended interactions with this menu.&lt;/li&gt;
&lt;li&gt;Most crucially, &lt;strong&gt;frequent users&lt;/strong&gt; subconsciously adapt their behaviour to avoid the &apos;illegal&apos; space on their right hand, much like tuning out a persistent annoyance. This adjustment detracts from natural engagement across all experiences.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;Apple Vision Pro&apos;s design principle&lt;/h2&gt;
&lt;p&gt;The Apple Vision Pro introduces a radical design principle: &lt;em&gt;Nothing&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;No intruding menus, and the primary interactions are look, pinch, and drag. For me, it is a revelation to experience. This might sound dramatic, but when you are accustomed to the Quest menu following you like a little fly, I found it refreshing to see nothing when I looked at my hands.&lt;/p&gt;
&lt;p&gt;It&apos;s best described as a significant release of tension; a part of my brain can finally relax.&lt;/p&gt;
&lt;p&gt;By extension, this underscores the impact of ecosystems and how profoundly small design decisions shape our experiences. Every time you solve something with more UI, you add cognitive load. Less really is more!&lt;/p&gt;
&lt;h2&gt;Where did the menu go?&lt;/h2&gt;
&lt;p&gt;I lied before. There still is a menu. If you&apos;ve used the Apple Vision Pro, you know that if you gaze up towards your eyebrows, a small arrow appears. Pinching while looking at it expands a menu reminiscent of the iPhone’s control center.&lt;/p&gt;
&lt;p&gt;So why do I think this differs from the menu on Meta’s Quest Horizon platform?&lt;/p&gt;
&lt;p&gt;My observation is that many VR experiences naturally involve your hands, creating a &apos;hot zone&apos; of activity around them. In contrast, the space between your eyebrows usually remains unutilised—a &apos;natural dead zone.&apos; Minimal global UI here is both unobtrusive and intuitively accessible.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/apple-vision-pro-pinch-fingers.png" medium="image"/><enclosure url="https://xopla.net/assets/apple-vision-pro-pinch-fingers.png" type="image/png" length="50000"/></item><item><title>Building for Apple Vision Pro. How I’m adjusting and what I think.</title><link>https://xopla.net/posts/building-for-apple-vision-pro-how-im-adjusting-and-what-i-think/</link><guid isPermaLink="true">https://xopla.net/posts/building-for-apple-vision-pro-how-im-adjusting-and-what-i-think/</guid><description>My experience with iOS app development is very basic. Here’s how I’ve been using my experience in other languages to my advantage.</description><pubDate>Fri, 10 May 2024 09:48:00 GMT</pubDate><content:encoded>&lt;p&gt;I’ve been working on an Apple Vision Pro application for a while and I’m completely blown away by the experience! Here is how and why I’m so impressed.&lt;/p&gt;
&lt;h2&gt;What I’ve done till now&lt;/h2&gt;
&lt;p&gt;I’m building it all natively in SwiftUI with RealityKit. At this stage I’ve built spatial windows that navigate to other windows, nested page lists, windows that change state, windows with portals in them, login flows, spatial volumes with portals in them, immersive spaces that hide all other apps, external live data pulled in and visualised in those spaces, and more elaborate hand tracking and scene understanding with spatial anchors, so each surface is passed to my app and I can use walls, tables, floors and ceilings.&lt;/p&gt;
&lt;p&gt;I got all of this working + refactored my code nicely into (what I believe is) best practice structure in a month or so.&lt;/p&gt;
&lt;p&gt;And here’s the catch: I have never built a native iOS app before! Ever.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/img_0997.jpeg&quot; alt=&quot;A 3D render of the Apple Vision Pro without the face mount&quot; title=&quot;The Apple Vision Pro&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;How I was able to get there so fast&lt;/h2&gt;
&lt;p&gt;The key for me to be able to progress so fast has been to use all of my experience in other languages to establish what I imagine should be possible and “the right way” in this one.&lt;/p&gt;
&lt;p&gt;I describe to ChatGPT how I would have coded it in language X and then go back and forth to refine the result.&lt;/p&gt;
&lt;p&gt;I don’t think I could have done this without all of my pre-existing knowledge. I’ve built tons of Unity applications, hundreds of websites in dozens of frameworks, and even some React Native stuff. Most of my Unity and WebXR projects utilise ARKit features, so that has been a helpful compass in knowing what to expect from Apple. I basically know all the questions to ask, and that there is an answer in there.&lt;/p&gt;
&lt;p&gt;I would for example ask:&lt;/p&gt;
&lt;p&gt;“In language X I would create a responsive menu in this way using relative values and a min max value. Is there an equivalent way in SwiftUI? Or an entirely different approach I should consider?”&lt;/p&gt;
&lt;p&gt;It would return something, and from the gist of it I would know whether it sorta looks right, and I would simply try it. If there were errors, I would ask about each of them. Then at some point the code technically works, and I start querying a new ChatGPT instance about &quot;if this is the best practice structure and how I could improve it&quot;.&lt;/p&gt;
&lt;p&gt;In this way I began establishing the scaffolding of my app, and my own understanding of the language improved immensely.&lt;/p&gt;
&lt;h2&gt;Learning by chatting&lt;/h2&gt;
&lt;p&gt;At this point I’ve learned so much that half the time I’m not asking anymore. Or I already know how to structure my code better so I do it manually.&lt;/p&gt;
&lt;p&gt;And all of this while Xcode has been a dream kit to work in! Every time I press build, my app shows up wirelessly on our Apple Vision Pro device in seconds.&lt;/p&gt;
&lt;h2&gt;Takeaway&lt;/h2&gt;
&lt;p&gt;Everything. Just. Works. And that is a new experience after years and years of half-baked SDKs and changing game engines.&lt;/p&gt;
&lt;p&gt;The quality of life that Apple developers get is really not talked about enough amid all the other Apple news! This is a total game changer for me, and I should have done it earlier.&lt;/p&gt;
&lt;p&gt;I’ve now begun building other projects, that I would normally have made in Unity, as native iOS apps.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/img_0997.jpeg" medium="image"/><enclosure url="https://xopla.net/assets/img_0997.jpeg" type="image/png" length="50000"/></item><item><title>How will we design the talent that will design our future media landscapes?</title><link>https://xopla.net/posts/how-do-we-design-the-talent-that-will-design-our-future-media-landscapes/</link><guid isPermaLink="true">https://xopla.net/posts/how-do-we-design-the-talent-that-will-design-our-future-media-landscapes/</guid><description>Reflections on DMJX&apos;s new campus opening, insights from Danish Royalty on AI/design, embracing lifelong learning, and innovation in design education.</description><pubDate>Thu, 25 Apr 2024 11:26:00 GMT</pubDate><content:encoded>&lt;p&gt;Yesterday, I had the privilege of attending the opening ceremony for the new DMJX campus in Copenhagen. It was a nostalgic and inspiring experience, filled with familiar faces from my time as a student and later as an external instructor at the school.&lt;/p&gt;
&lt;p&gt;One of the highlights was the opening speech by His Majesty King Frederik of Denmark. Surprisingly, a significant portion of his address centered around the blurred nature of artificial intelligence (AI) and design, and the role this institution of visual communication will play in shaping our future media landscape within a democratic society. How will our students adapt to the integration of AI? More importantly, how will they maintain control over the results and their own critical thinking abilities?&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/his_majesty_king_frederik_of_denmark.jpg&quot; alt=&quot;A picture I took of His Majesty, King Frederik of Denmark as he held his speech at DMJX, looking straight into my camera&quot; title=&quot;His Majesty King Frederik at DMJX&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The speeches from the school&apos;s headmaster, Julie Sommerlund, and the chairperson of the board, Lea Korsgaard, further reinforced the notion that students today must embrace lifelong learning due to the rapid pace of change. They emphasised that design and visual communication extend far beyond screens and print, encompassing the very fabric of the world we inhabit.&lt;/p&gt;
&lt;p&gt;As an alumnus, I couldn&apos;t help but feel a profound sense of affirmation that this institution will continue to nurture great minds in design – individuals who approach their craft with humility, prioritising human-centred solutions above all else.&lt;/p&gt;
&lt;p&gt;Reflecting on my involvement over the past eight years, teaching a three-week course in coding for immersive media to fourth-semester students in the Interaction Design program, I&apos;m filled with excitement for the future. The launch of the &quot;Coded Design&quot; program last year, aimed at educating coders who think and work like designers, represents an innovative step forward.&lt;/p&gt;
&lt;p&gt;I can&apos;t wait to witness the growth and evolution of this old school in its modern new building, and I look forward to the opportunity to contribute.&lt;/p&gt;
&lt;p&gt;This school could be a bright light in a murky media future. Here&apos;s hoping!&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/his_majesty_king_frederik_of_denmark.jpg" medium="image"/><enclosure url="https://xopla.net/assets/his_majesty_king_frederik_of_denmark.jpg" type="image/png" length="50000"/></item><item><title>Exploring node based webgl design</title><link>https://xopla.net/posts/exploring-node-based-webgl-design/</link><guid isPermaLink="true">https://xopla.net/posts/exploring-node-based-webgl-design/</guid><description>Trying out the cables.gl webgl framework. Balancing lots of particles and performance.</description><pubDate>Sun, 21 Apr 2024 13:22:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve been playing around with cables.gl. It took a little focus to figure out how data flows and how triggers work, but essentially it holds a lot of similarities to other node-based systems. Quite a lot of it resembles Meta Spark Studio, and a bunch resembles Unity&apos;s Shader Graph too.&lt;/p&gt;
&lt;p&gt;I have managed to figure out how to split arrays of points into arrays of Vector3&apos;s that I can then offset with random time values to create the illusion of a dynamic particle simulation.&lt;/p&gt;
&lt;p&gt;Click and drag to rotate (vertical axis rotation locked). Scroll to zoom.&lt;/p&gt;
&lt;p&gt;Really a fascinating tool! I could see how this might be preferable to writing three.js directly. Will probably create a lot more experiments like this.&lt;/p&gt;
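&lt;p&gt;Outside of the node graph, the same idea can be sketched in a few lines of plain JavaScript (my own illustration, not cables.gl code): split a flat position array into vec3-like points, give each one a random phase, and offset it on a sine wave over time.&lt;/p&gt;

```javascript
// Split a flat [x, y, z, x, y, z, ...] array into point objects, each with a
// random phase so the points drift out of sync.
function toAnimatedPoints(flat) {
  var points = [];
  var count = Math.floor(flat.length / 3);
  for (var i = 0; i !== count; i++) {
    points.push({
      x: flat[3 * i],
      y: flat[3 * i + 1],
      z: flat[3 * i + 2],
      phase: Math.random() * Math.PI * 2,
    });
  }
  return points;
}

// Per frame: displace each point vertically on a sine wave offset by its phase.
function animate(points, time, amplitude) {
  return points.map(function (p) {
    return { x: p.x, y: p.y + amplitude * Math.sin(time + p.phase), z: p.z };
  });
}
```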

&lt;p&gt;The visual style of the experiment was inspired by this picture from &lt;a href=&quot;https://www.behance.net/gallery/70089605/Big-Bang-poster-series&quot;&gt;The Big Bang poster series&lt;/a&gt; by &lt;strong&gt;Other Peter&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/big-bang-poster-810x1080-from-behance.jpg&quot; alt=&quot;A black and white picture of a planet in shadows with a white falling meteor falling vertically from above. Lots of grain&quot; title=&quot;Falling star poster inspired by The Big Bang by Other Peter&quot; /&gt;&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/big-bang-poster-810x1080-from-behance.jpg" medium="image"/><enclosure url="https://xopla.net/assets/big-bang-poster-810x1080-from-behance.jpg" type="image/png" length="50000"/></item><item><title>Getting on threads</title><link>https://xopla.net/posts/2024-03-08-getting-on-threads/</link><guid isPermaLink="true">https://xopla.net/posts/2024-03-08-getting-on-threads/</guid><description>Thoughts on getting on the new Threads platform and using it as a live catalogue for my research.</description><pubDate>Fri, 08 Mar 2024 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;I’m honestly surprised! Threads is… nice people posting nice things 💁‍♂️ I will post the things I usually DM myself on Slack each day to dig into later. 🔬&lt;/p&gt;
&lt;p&gt;Let’s see how it plays out, but for now I like the feeling of Threads. It’s a simpler system and people are nice and there are already a lot of interesting people active there! Hopefully a step towards a better, less platform centric, internet. (Depending on how it connects with the rest of the fediverse) 🤞&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/2024-03-08-getting-on-threads.png" medium="image"/><enclosure url="https://xopla.net/posts/2024-03-08-getting-on-threads.png" type="image/png" length="50000"/></item><item><title>What could a &quot;personal&quot; spatial computing device do to a society?</title><link>https://xopla.net/posts/what-could-a-personal-spatial-computing-device-do-to-a-society/</link><guid isPermaLink="true">https://xopla.net/posts/what-could-a-personal-spatial-computing-device-do-to-a-society/</guid><description>The Apple Vision Pro raises questions about public access to technology. How will personal spatial computing affect digital equality?</description><pubDate>Tue, 13 Feb 2024 11:28:00 GMT</pubDate><content:encoded>&lt;p&gt;The thing that really stands out and gives me pause with the new Apple Vision Pro device, is just how bound to a person it is. For many, especially if you buy it for yourself, &quot;a personal device&quot; is a great feature!&lt;/p&gt;
&lt;p&gt;Every interaction through the Vision Pro will be a reflection of yourself because you are logged in with your Apple ID. That&apos;s great! If you bought it yourself, that&apos;s exactly what you want. It&apos;s a smart move by Apple, and from a strategic standpoint I get it: no doubt this will actually drive adoption on a consumer level. It&apos;s a personal device, not a shared piece of equipment.&lt;/p&gt;
&lt;h2&gt;Alright. Then what?&lt;/h2&gt;
&lt;p&gt;Denmark is (by some measures) the most digital country in the world. Did you know that iOS has a 67% market share here and only 57% in the US?&lt;/p&gt;
&lt;p&gt;While we are a quite wealthy country, libraries (as I&apos;m sure is the case in many other countries) play a role in making sure access to tech is equal. &quot;Doing your taxes with 1 click&quot; still assumes you have physical access to a computing device, and there are of course many reasons why a citizen might not have that access. Libraries grant anyone free, unrestricted access to computers, PlayStations and even tech support, including help with the digital government office (Borger).&lt;/p&gt;
&lt;p&gt;Libraries here basically try, with all means available, to ensure everyone has as equal access to technology as possible. This builds on years of computing devices that allow shared access. Anyone can freely enter the building, in some cases at any time of day, and start to use a computer for pretty much anything. Your library card is your only &quot;account&quot;.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/library-meta.jpg&quot; alt=&quot;A heavily photoshoppped photo of a library. Looks distorted and very digital&quot; title=&quot;Future of libraries&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The Reality of Personal Computing&lt;/h2&gt;
&lt;p&gt;You have probably guessed what I&apos;m getting at: As we (if we) transition to personal spatial computing devices, tied to our individual retinas, as a significant utility in our society, what will happen to public computing access?&lt;/p&gt;
&lt;p&gt;Specialists of all kinds often cluster in the country&apos;s largest cities. Spatial computing is especially well positioned to bridge borders and remove barriers to access to experts like doctors, psychologists and teachers.&lt;/p&gt;
&lt;p&gt;It&apos;s a computer. The Apple Vision Pro is promoted as a Macbook for your face, not an iPhone, so wouldn&apos;t a public spatial computing device in rural libraries make a lot of sense? Just like computers have? Could it challenge the &quot;urban privilege&quot; and play a positive role in counterurbanization?&lt;/p&gt;
&lt;p&gt;Almost every election year in Denmark, &quot;decentralization&quot; is a hot topic, and as a highly digital country, technology always plays a role in it. With &quot;Spatial Computing&quot; being so well positioned to bridge divides, how do we feel about the fact that its designers primarily want it to be &quot;personal&quot;?&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/library-meta.jpg" medium="image"/><enclosure url="https://xopla.net/assets/library-meta.jpg" type="image/png" length="50000"/></item><item><title>A book about the world of code, written right before it all changes.</title><link>https://xopla.net/posts/a-book-about-the-world-of-code-written-right-before-it-all-changes/</link><guid isPermaLink="true">https://xopla.net/posts/a-book-about-the-world-of-code-written-right-before-it-all-changes/</guid><description>This mindblowing book on live coding &amp; creative tech arrived just before AI changes everything!</description><pubDate>Tue, 05 Dec 2023 11:03:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;m finally taking the time for a full, in-depth read of what I believe will be an epoch-defining book! It&apos;s been a deep dive into a world in the shadows, one I&apos;ve long wanted to understand my own place in - a world which, with AI, will surely experience a boom as human-machine relations start to blur.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/1701774223337.jpg&quot; alt=&quot;Picture of the book &amp;quot;Live Coding, A user&apos;s manual&amp;quot; taken indoor with a window with snowy trees outside.&quot; title=&quot;Live Coding, A user&apos;s manual&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The book is called “Live Coding, A user’s manual” and it might surprise you that it&apos;s less about code than a philosophical reflection on time and creation. There&apos;s practically no actual code in it! 😉 If you&apos;re longing for modern thoughts on the human brain and how we interface with technology through performance, this is absolutely worth picking up.&lt;/p&gt;
&lt;h1&gt;A Unique Snapshot Before Everything Changed&lt;/h1&gt;
&lt;p&gt;Interestingly, this book was written riiiight before AI and LLMs&apos; own underground creative communities blossomed. Seen with future eyes, it&apos;s going to provide a unique look at human-machine relations right before the moment everything changed. I wouldn&apos;t be surprised if the book and the live coding community transcend into a second era with a new generation of live coders, calling for a natural second edition of the book.&lt;/p&gt;
&lt;p&gt;And yes, do people look at me weirdly when I bring out this bad boy at the spa house? Absolutely. Does it cleanse my soul and open my mind? 💯💆‍♂️ (Writing this as I&apos;m enjoying a vacation at the Yasuragi spa house in Stockholm. Major recommendation!)&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/1701774223337.jpg" medium="image"/><enclosure url="https://xopla.net/assets/1701774223337.jpg" type="image/png" length="50000"/></item><item><title>Separating Objects in Gaussian Splatting Scenes</title><link>https://xopla.net/posts/separating-objects-in-gaussian-splatting-scenes/</link><guid isPermaLink="true">https://xopla.net/posts/separating-objects-in-gaussian-splatting-scenes/</guid><description>A look at Mask3D and how it might enable us to separate and animate objects in Gaussian Splat scenes.</description><pubDate>Mon, 30 Oct 2023 15:43:00 GMT</pubDate><content:encoded>&lt;p&gt;As Gaussian Splatting is blasting forward in public awareness, one thing I&apos;ve been asked frequently in my master classes has been: &quot;How do I separate and animate individual objects in a Gaussian Splatting (or NeRF) scene?&quot;&lt;/p&gt;
&lt;p&gt;While quite a lot still has to happen before we can simply pull parts apart in 3D space and infill the scene behind them (especially at runtime), some interesting developments are going on...&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/mask3d_table.jpg&quot; alt=&quot;Screenshot of a Mask3D render where certain parts of a 3D scanned room have different colors.&quot; title=&quot;Mask3D&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The Power of Mask3D&lt;/h2&gt;
&lt;p&gt;The latest development is Mask3D: Mask Transformer for 3D Instance Segmentation. The idea is beautifully simple: you provide the model with a .ply pointcloud file and it segments the cloud for you, grouping the points into labeled segments that you could then do anything with... for example, you could move them apart in 3D space.&lt;/p&gt;
&lt;p&gt;The cool thing is that Gaussian Splat scenes happen to be .ply files. So there&apos;s that immediate connection.&lt;/p&gt;
&lt;p&gt;From where I stand, it shouldn&apos;t take too much engineering to figure out how to animate objects, as it&apos;s just point data. They have a demo page that looks pretty straightforward, so it could be an interesting experiment to document and share. In the coming days we might look into it at Manyone.&lt;/p&gt;
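&lt;p&gt;To make that concrete, here is a minimal sketch (Python with NumPy) of what &quot;moving a segment apart&quot; could look like once an instance-segmentation model like Mask3D has labeled the points. The arrays and the translate_instance helper below are hypothetical illustrations of the principle, not Mask3D&apos;s actual API.&lt;/p&gt;

```python
import numpy as np

# Hypothetical output of an instance-segmentation pass: N points with
# xyz coordinates and one integer instance label per point.
points = np.array([
    [0.0, 0.0, 0.0],
    [0.1, 0.0, 0.0],
    [1.0, 1.0, 0.0],
    [1.1, 1.0, 0.1],
])
labels = np.array([0, 0, 1, 1])  # two instances, e.g. "table" and "plant"

def translate_instance(points, labels, instance_id, offset):
    """Move every point belonging to one instance by a fixed offset."""
    moved = points.copy()
    moved[labels == instance_id] += np.asarray(offset)
    return moved

# Pull instance 1 one unit upward, leaving instance 0 in place.
separated = translate_instance(points, labels, 1, [0.0, 0.0, 1.0])
```

&lt;p&gt;Animating a segment would then just mean applying a time-varying offset per frame before writing the points back out.&lt;/p&gt;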
&lt;h2&gt;Different Approaches to Object Separation&lt;/h2&gt;
&lt;p&gt;Previously on this topic I looked into something called Distilled Feature Fields. That&apos;s still highly relevant, as the two techniques operate in different spatial domains: DFF is field-based, while Mask3D is point-based, and there&apos;s a fundamental difference in their potential. It&apos;s a bit like how NeRFs are only temporarily taking a break, letting Gaussian Splatting shine while our hardware catches up to the potential of runtime NeRFs.&lt;/p&gt;
&lt;p&gt;Links:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/JonasSchult/Mask3D&quot;&gt;Mask3D: Mask Transformer for 3D Instance Segmentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://www.xopla.net/posts/nerf-segmentation-as-distilled-feature-fields/&quot;&gt;Distilled Feature Fields&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/mask3d_table.jpg" medium="image"/><enclosure url="https://xopla.net/assets/mask3d_table.jpg" type="image/png" length="50000"/></item><item><title>The Death of Genuine Conversation in the Age of AI</title><link>https://xopla.net/posts/the-death-of-genuine-conversation-in-the-age-of-ai/</link><guid isPermaLink="true">https://xopla.net/posts/the-death-of-genuine-conversation-in-the-age-of-ai/</guid><description>As AI tools become more integrated into our daily communication, I reflect on the consequences for genuine human interaction and why I&apos;m concerned about where we&apos;re heading.</description><pubDate>Thu, 16 Mar 2023 14:45:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve spent a substantial amount of time researching and working with GPT-4 and other AI tools. And I&apos;m honestly tired of the unbalanced discourse happening right now. It&apos;s possible to be both curious and optimistic about AI while also reflecting on its consequences. I for one will not subscribe to the cult-like behavior we&apos;re seeing on LinkedIn and Twitter anymore.&lt;/p&gt;
&lt;h2&gt;What no one seems to be talking about&lt;/h2&gt;
&lt;p&gt;Here&apos;s some of what I think is not being said by people like me:&lt;/p&gt;
&lt;p&gt;Imagine every single message you receive anywhere could have been fine-tuned, made longer, or generated entirely from scratch. That&apos;s where we&apos;re heading. Just watch the latest Google tool trailer for Google Workspace with its &quot;I&apos;m feeling lucky&quot; button for any conversation. It fucking sucks.&lt;/p&gt;
&lt;p&gt;Truth will die. No conversation will be genuine anymore. Soon you won&apos;t be able to trust a single interaction with another human. Not your loved ones, not your friends, not random encounters. Anyone could be copiloting their life, and probably most will.&lt;/p&gt;
&lt;p&gt;The human race will plateau creatively. Individually, we will feel infinitely enlightened in this moment. Nothing greater has ever been given to us. But as a global organism, this is the death and final stage of enlightenment for all of us. The amount of data remixing around us will overshadow and drown out any new real thoughts.&lt;/p&gt;
&lt;h2&gt;The conflict within me&lt;/h2&gt;
&lt;p&gt;I&apos;m curious about technology. I&apos;m curious about AI. I can find many cases where AI is only a small supplement to human interactions and where it elegantly releases potential we have in us. Sadly, I don&apos;t think that&apos;s where we&apos;re going to stay.&lt;/p&gt;
&lt;p&gt;Everywhere around us, AI tools will leak in and remove genuine interactions, leaving us with a weird feeling of AI-generated content all around us. Each of us will see it everywhere: in emails, text messages, video, advertisements, news, and real-life conversations using smart glasses soon. Yeah, the person in front of you might be copiloted soon.&lt;/p&gt;
&lt;h2&gt;A pessimistic outlook&lt;/h2&gt;
&lt;p&gt;Pessimistic, yeah. I know. Sorry. But that&apos;s the real talk people like me should also be sharing... All is not great.&lt;/p&gt;
&lt;p&gt;Maybe I&apos;m wrong, but what if I&apos;m not? &quot;The death of all genuine conversations.&quot; It fucking sucks.&lt;/p&gt;
&lt;p&gt;As a small drop in the bucket, I&apos;ve added this to my email signature:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&quot;I won&apos;t be using AI to generate my part of the conversation and I 🤞 you won&apos;t either. Let&apos;s keep conversations human and genuine.&quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;h2&gt;A plea for humanity&lt;/h2&gt;
&lt;p&gt;Let&apos;s try to stay genuine. I&apos;d rather get a short, clear, and genuine message from you than some generated convoluted copilot plur.&lt;/p&gt;
&lt;p&gt;I feel like we&apos;re turning a blind eye to the potential loss of what makes human connection special. While most technologists focus on capabilities and potential benefits, few are willing to address the deeper implications for human-to-human interaction.&lt;/p&gt;
&lt;p&gt;At the risk of sounding like a luddite (which I&apos;m definitely not), I think we need to have an honest conversation about how much we want AI to mediate our relationships with each other.&lt;/p&gt;
&lt;p&gt;Will we reach a point where we need to specifically seek out &quot;AI-free&quot; interactions? Will &quot;human-written&quot; become a selling point for content? Perhaps the most valuable thing in the near future will be knowing that you&apos;re connecting with an actual, unaugmented human consciousness.&lt;/p&gt;
&lt;p&gt;I don&apos;t have all the answers, but I do know that I value genuine human connection too much to see it eroded without at least raising my voice. If you want to discuss AI and GPT-4 with someone who&apos;s spent a substantial amount of time researching and working with it, but who doesn&apos;t unconditionally drink the Kool-Aid... let me know.&lt;/p&gt;
&lt;p&gt;At least you&apos;ll know it&apos;s really me talking.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/the-death-of-genuine-conversation-in-the-age-of-ai.png" medium="image"/><enclosure url="https://xopla.net/posts/the-death-of-genuine-conversation-in-the-age-of-ai.png" type="image/png" length="50000"/></item><item><title>NeRF Segmentation as Distilled Feature Fields</title><link>https://xopla.net/posts/nerf-segmentation-as-distilled-feature-fields/</link><guid isPermaLink="true">https://xopla.net/posts/nerf-segmentation-as-distilled-feature-fields/</guid><description>NeRF technology achieves object segmentation with DFF, allowing separate control of captured 3D elements. Could Apple be next to adopt it?</description><pubDate>Mon, 06 Jun 2022 10:37:00 GMT</pubDate><content:encoded>&lt;p&gt;I&apos;ve been raving about how Neural Radiance Fields (NeRF) are better than using polygons and photogrammetry for capturing the essence of the world. But there was one thing I was really looking forward to: object segmentation - separating different elements like a plant from the dirt, from the pot, from the table...&lt;/p&gt;
&lt;p&gt;Well, it&apos;s happening! NeRF object segmentation, or &lt;a href=&quot;https://pfnet-research.github.io/distilled-feature-fields/&quot;&gt;Distilled Feature Fields&lt;/a&gt; (DFF), is now a reality!&lt;/p&gt;
&lt;p&gt;In recent demonstrations, you can see how an ML model can detect specific elements like a flower and make it possible to modify it from any angle. You could even remove it entirely by zeroing the point density within that segment.&lt;/p&gt;
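&lt;p&gt;As a rough sketch of that &quot;zero the density&quot; idea: assume each sample in the field carries a distilled feature vector that can be compared against a query embedding. Everything below (the arrays, the threshold, the remove_segment helper) is a hypothetical illustration of the principle, not the DFF codebase&apos;s API.&lt;/p&gt;

```python
import numpy as np

# Hypothetical per-sample volume densities and distilled feature vectors.
density = np.array([2.0, 1.5, 0.1, 3.0])
features = np.array([
    [1.0, 0.00],
    [0.9, 0.10],
    [0.0, 1.00],
    [0.95, 0.05],
])
query = np.array([1.0, 0.0])  # stand-in embedding for a query like "flower"

def remove_segment(density, features, query, threshold=0.8):
    """Zero the density of every sample whose feature matches the query."""
    # Cosine similarity between each sample's feature and the query.
    sims = features @ query
    sims = sims / (np.linalg.norm(features, axis=1) * np.linalg.norm(query))
    edited = density.copy()
    edited[np.greater_equal(sims, threshold)] = 0.0
    return edited

edited = remove_segment(density, features, query)
# Matching samples are removed; the unrelated sample keeps its density.
```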
&lt;h2&gt;Why this is huge&lt;/h2&gt;
&lt;p&gt;This means we can potentially split objects apart and animate them individually, reclaiming some of that control we lost by moving away from polygons. While there&apos;s still some way to go, considering how fast this field is moving, I wouldn&apos;t be surprised if we started seeing implementations in 3D software like Blender soon.&lt;/p&gt;
&lt;h2&gt;Apple&apos;s potential move&lt;/h2&gt;
&lt;p&gt;I have a strong feeling Apple is eyeing this technology. It would be classic Apple to endorse something like this and package it in a proprietary format, especially since it provides a &quot;richer&quot; spatial representation. This could be a key differentiator for their Vision Pro headset.&lt;/p&gt;
&lt;p&gt;The speed at which NeRF technology is evolving is mind-blowing. Can&apos;t wait to see where this goes!&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/nerf-segmentation-as-distilled-feature-fields.png" medium="image"/><enclosure url="https://xopla.net/posts/nerf-segmentation-as-distilled-feature-fields.png" type="image/png" length="50000"/></item><item><title>Neural Radiance Fields exploration</title><link>https://xopla.net/posts/neural-radiance-fields-exploration/</link><guid isPermaLink="true">https://xopla.net/posts/neural-radiance-fields-exploration/</guid><description>Explore Neural Radiance Fields (NeRF), a groundbreaking 3D capture technique that captures light behavior instead of polygons, revolutionizing realistic 3D modeling.</description><pubDate>Sun, 29 May 2022 07:21:00 GMT</pubDate><content:encoded>&lt;p&gt;I’ve lately been exploring a new kind of 3D capture technique. One where you don’t capture polygons, but actual understanding of light and reflection. (Notice the light reflecting off the leaves)&lt;/p&gt;
&lt;p&gt;It’s called a Neural Radiance Field or a NeRF. It’s still a bit fiddly to work with and hard to use in practice, but I’m 100% sure this is the future of hyper realistic 3D. This technique takes us one step closer to capturing the essence of objects, not just the surface.&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/posts/neural-radiance-fields-exploration.png" medium="image"/><enclosure url="https://xopla.net/posts/neural-radiance-fields-exploration.png" type="image/png" length="50000"/></item><item><title>My journey through 10 years at Molamil/Manyone</title><link>https://xopla.net/posts/my-journey-through-10-years-at-molamil-manyone/</link><guid isPermaLink="true">https://xopla.net/posts/my-journey-through-10-years-at-molamil-manyone/</guid><description>What I thought would just be a student job, turned out to be quite a bit more. One day you wake up and realize how much has changed.</description><pubDate>Sun, 10 Apr 2022 10:54:00 GMT</pubDate><content:encoded>&lt;p&gt;I haven’t really thought this post through, but somehow I’d feel a bit empty if I didn’t try and put down some words to acknowledge the journey. People tell me 10 years is a long time. Maybe there’s some inspiration to get in there or maybe some insights, but I’ve warned you now 😄 It’s gonna be long. (This also isn’t the official Molamil bio. I’ll leave that to Jorge to write some day. This is just me.)&lt;/p&gt;
&lt;p&gt;10 years ago, I was studying 2nd semester Interaction Design at DMJX when some funky fellows came to teach us Flash. Jorge, Ramiro and Patrik had a company called Molamil where, it turned out, they secretly did all the cool interactive stuff for the big Danish agencies. Molamil never put itself in the spotlight, so we hadn’t heard of them in class. They just showed up and tried to teach us ActionScript. I’d dabbled with code at that point, but thought I was more into Motion Graphics. Turned out I was wrong.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://xopla.net/assets/1649586467694.jpg&quot; alt=&quot;Business cards from Molamil&quot; title=&quot;Credit: Nikolaj Stausbøl/Molamil&quot; /&gt;&lt;/p&gt;
&lt;p&gt;3 weeks later I signed as a student helper. I cut out Toyota wheels and removed shadows so they could get animated, and slowly and steadily I started to do more and more coded work myself. Think what you want about banners, but if you wanted to learn code, they were a brilliant way to do so. They were like small puzzles to solve, requiring you to balance the size and be as creative as possible within a small space. If you remember the Tivoli banners, that was us 🤗 I really had the best teachers in Ramiro, Jorge, Patrik, Abel, Martin, Jakob, Mathias and Chris ❤️ (Yes, it was a boys club back then)&lt;/p&gt;
&lt;h2&gt;Breaking the system&lt;/h2&gt;
&lt;p&gt;Luckily we didn’t find it fulfilling to make banners and campaign sites our entire business. Instead we wanted to get closer to the end client rather than being the tool of the agencies. Enter the design award shows: as it happens, when you are one of the only digital/interactive companies in the country, you end up building most of the interactive work.&lt;/p&gt;
&lt;h2&gt;Here’s how I think it went:&lt;/h2&gt;
&lt;p&gt;Agencies like to win awards, and we worked with a lot of them. At some point we started requesting to get credited in the award shows.&lt;/p&gt;
&lt;p&gt;Most agencies bring their clients to the award show as a treat. This incidentally gives the client a really good idea of who actually made their project and, even more importantly, who made the work of their competitors.&lt;/p&gt;
&lt;p&gt;Because of our position as one of the only digital houses in a city of communication houses, as soon as we began getting credited, we were pretty much part of winning everything! It became so crazy that one year it started getting awkward. We were the digital partner on all the competing projects within the same categories. When our own website was nominated and won as “Molamil.com, by Molamil, for Molamil” next to other projects we’d also done, we couldn’t hold it back. We felt like we’d broken the system.&lt;/p&gt;
&lt;p&gt;Since then we turned our heads towards building for clients. We built apps and websites. Of course, we weren&apos;t so alone in the digital domain anymore. There are some amazing studios in Denmark pulling off amazing work, which we now competed with. To be able to compete, our team grew with new friends in Anne, Christina, Thomas, Joachim and Kasper, who have all meant such a big deal in my personal life too. It started to feel like a Mola-family.&lt;/p&gt;
&lt;h2&gt;Creative Technology&lt;/h2&gt;
&lt;p&gt;At this point I’d also returned home from internships in London (Unit9) and the USA (TBWA/Apple) and I remember meeting a guy who had an amazing job title: “Creative Technologist”. I had no idea what he really did and I’d never heard of anyone in Denmark called that, but I could see how clever the title was for my ideas. I signed a full-time contract with Jorge as a Creative Technologist. In the beginning lots of people laughed at that title 😄 But I must say, I’ve noticed many friends and colleagues in other companies have adopted it too (some only recently), so the joke is on the industry 😉 Still, I’m not sure what the title actually stands for. I feel like we all practice it very differently.&lt;/p&gt;
&lt;p&gt;Initially the Creative Technology I got to practice was more akin to UX design. Molamil didn’t really have UX as a practice at this point, so I leaned into this and invented some rules. Later on, way greater UX’ers than me joined in, but for a period I got to think about creating creative architectures and technical prototypes for the likes of McKinsey, Radio24Syv (RIP), Forsvaret, Ørsted, Experimentarium etc. Anders, Anders, Morten and Anne Leigh joining was a big part of this transition!&lt;/p&gt;
&lt;p&gt;I also started teaching at DMJX myself, the same course Jorge and Ramiro taught. I’ve been teaching there every year since, and lightning struck again when I taught Caroline, who turned out to be an absolute master of code and UX 🙏&lt;/p&gt;
&lt;h2&gt;Finding a creative outlet&lt;/h2&gt;
&lt;p&gt;Molamil was always a small team of at most 14. We reflected a lot on what we, in general, wanted to do. At one point we took in an intern from Hyper Island, specialised in creative workshop facilitation, who later worked with us full time and is now my best friend. His name is Habla and he transformed my own and everybody else&apos;s way of thinking about work. He later transformed himself, pivoted to become an expert developer and designer and is now a full-time magician/unicorn in a startup, but the years we got to work together meant the world to me.&lt;/p&gt;
&lt;p&gt;This was the first time I started reflecting on whether I wanted something else. I remember having many talks with Jorge about whether it was possible to have more creative storytelling outlets through Molamil. I wanted to work with actors and directors and I wanted to create projects that made very little financial sense, but would utilise our skills for other things too. Jorge didn’t flinch when he said that was ok with him, which to this day still blows my mind. (In retrospect, our current strong position in the XR space points heavily back to this moment, so long term it is financially paying off to do art!)&lt;/p&gt;
&lt;p&gt;This was the same year VR broke through with the Oculus DK1 and I found some new friends in the crazy “no compromise” art company Makropol. With them I built a number of side projects, culminating in the massive VR installation Anthropia (where I also met Karina, who for a good while did amazing game development and UX work with us. Now she works at the coolest games company, Triband) and part of the engine behind their End of Night experience. It&apos;s also how I found friends at the young company Khora, by whom I&apos;ve since been greatly inspired. From then on, doing technology art projects within or next to Molamil became the everyday.&lt;/p&gt;
&lt;h2&gt;Pivoting&lt;/h2&gt;
&lt;p&gt;At one point, I heard about a crazy project I had to be part of: an AR fashion show for the brand WeAreTheFaces at Copenhagen Fashion Week, and this is where I met Hannah. She’s without a doubt the most transformative person in the creative technology world that I have ever had the chance to meet, and to my (and Molamil’s) luck she wanted to work with us. Together we put more focus on technology at Molamil. We created a new division called MolaLAB, focused on exploring and applying new drivers within emerging tech before they become widely applicable. Around this time we’d also been joined by Dan, who tamed our design practice and transformed it to completely new heights, and Justus, who somehow manages to effortlessly mix design and technology in everything he does.&lt;/p&gt;
&lt;p&gt;Probably the craziest thing we did in MolaLAB was X-Ray Fashion in 2018, when we were contacted by another Danish VR company called MANND. The two owners, Maria and Signe, reached out to us with a crazy idea: can we build a physical VR installation with physical sensorial surfaces and effects that can be shipped to the Venice Film Festival in 2 months? The whole process of building this installation is worth making an entire documentary about. The director was Francesco Carrozzini and the clients were the World Bank and Microsoft cofounder Paul Allen.&lt;/p&gt;
&lt;p&gt;It was such an undertaking by everybody involved, but we did it, and in September we shipped a 2-ton installation to Venice, and Hannah and I traveled there ourselves to facilitate it. After some bug fixes and a few days of observation I could relax, drink sangria in the sun and fall in love with Maria, the most hard-working visionary woman I know in the tech industry (and who continuously blesses me with new fun and daring tech challenges ❤️) I feel very blessed to have experienced Venice in this special setting. We and the other artists and directors were the only people roaming the entire island after 9pm the whole week. An experience impossible to compete with outside of the art world.&lt;/p&gt;
&lt;p&gt;When we got home we did a few more special projects. We did an AR opera called Silent Zone w. Tue Biering and Louise Alenius and a couple of other VR experiences and sensor-based installations, but Hannah and I started talking about what the next step would be. We wanted to use XR to drive change within bigger organisations and impact the world, but to do that we would need to rethink Molamil. I remember talking with Jorge about it in Kings Garden and he told me to “just wait a little”.&lt;/p&gt;
&lt;h2&gt;Reaching global w. Manyone&lt;/h2&gt;
&lt;p&gt;Again the mola-timing was perfect and a few months later we were part of starting Manyone, a strategic design and technology hybrid aiming to take on the biggest consultancies in the world. With applied technology in focus. Basically everything I could have dreamt of. Once more the ceiling lifted above us right when we needed it. We’ve met an ocean of new talented friends in Denmark and across the world. Hannah, Andrew and I are leading an amazing team of bright minds within emerging technology and we get to apply strategy around technology on world scale.&lt;/p&gt;
&lt;p&gt;The next big leap for us is still in the works, but I have a good feeling it leans heavily towards a mix of Spatial Computing (XR), strategic creativity and Data Science for communication.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Hint! Reach out to me or Hannah if you are passionate about storytelling with data!!!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This is the end of my post, but in no way the end of the mola-journey. The spirit of a small tight-knit team lives on in our team and the teams around us in Copenhagen and around the world. The passion and love for well-thought-through craftsmanship is spreading like wildfire. We have the chance to form a global work environment however we want. The only top-down requirement we are met with is to strive to do so with a humanity perspective in mind.&lt;/p&gt;
&lt;p&gt;I don&apos;t believe in luck or fate, but I believe in rewarding good intentions, solid work and dedication. I&apos;ve definitely felt rewarded by my work and I strive to pay that forward through my own privilege. I genuinely think workplaces should strive to be a family. Not everybody would ofc. want that, but I wouldn&apos;t have had it any other way myself. It&apos;s what kept me asking for more at Molamil 🤗&lt;/p&gt;
&lt;p&gt;Ps. It’s also my birthday today. 🎉 31 years. Yeah, I signed the contract on my birthday...&lt;/p&gt;
</content:encoded><dc:creator>Nikolaj Stausbøl</dc:creator><media:content url="https://xopla.net/assets/1649586467694.jpg" medium="image"/><enclosure url="https://xopla.net/assets/1649586467694.jpg" type="image/png" length="50000"/></item></channel></rss>