I worked at a game company in Korea doing AI research — graphics, vision, and image generation. I built the in-house image gen service there. While reading generative AI papers, I came across virtual try-on research and had a realization: people will eventually shop by seeing products on themselves, not just browsing photos of models. I started experimenting on weekends. The early results were rough, but promising enough that I left my job.
The core technical challenge: when you use image generation models to transfer someone's look onto another person, they either lose your identity or drop the style details. You ask it to transfer a specific makeup look and it gives you a completely different face, or an outfit loses its pattern and texture, or the hairstyle comes out flat. A prompt-only approach just isn't precise enough.
So I built a multi-stage pipeline — object detection, inpainting, and several other steps — to preserve your identity while accurately transferring style details.
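A hedged sketch of how such a mask-based pipeline might be orchestrated (stage names and interfaces are my illustration, not the author's actual code): identity is preserved by regenerating only the pixels inside a detected mask, while everything outside it is copied through untouched.

```python
# Hypothetical orchestration of a mask-based try-on pipeline.
# Each stage is a stub; real versions would wrap an object detector,
# a segmentation model, and an inpainting/diffusion model.

def detect_region(photo, category):
    # Find the region to edit (face for makeup, garment for fashion, hair for hair).
    return {"category": category, "bbox": (40, 40, 200, 200)}  # placeholder box

def build_mask(photo, region):
    # Turn the detected region into a pixel mask; pixels outside it stay
    # untouched, which is what preserves the user's identity.
    return {"inside": region["bbox"]}

def inpaint(photo, mask, style_ref):
    # Regenerate only the masked pixels, conditioned on the style reference,
    # so pattern/texture/color come from style_ref rather than a text prompt.
    return {"base": photo, "edited_region": mask["inside"], "style": style_ref}

def try_on(user_photo, style_photo, category):
    region = detect_region(user_photo, category)
    mask = build_mask(user_photo, region)
    return inpaint(user_photo, mask, style_photo)
```

The point of the staging is that the generative model never gets a chance to redraw the whole face or body; it only fills in the masked region.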
Unlike preset filters or brand catalog try-ons, users share styles from their own everyday photos and anyone in the community can try that look on themselves with one tap. It works across three categories: beauty (makeup transfer), fashion (outfit try-on), and hair (style and color).
I launched in the US and Korea about a month ago. Still early and plenty to improve — would love honest feedback. Does the try-on quality feel convincing?
Demo: https://youtube.com/shorts/mDLkiV3D4rI iOS: https://apps.apple.com/app/looktake-share-style-with-ai/id67... Android: https://play.google.com/store/apps/details?id=io.looktake.ap...
The impact is traceable via ICMP, reproducible via TCP, and difficult to measure via UDP. This is why monitoring tools are misleading: there is no "slowness" from interface saturation; instead, corrupted packets are silently discarded at the interface level. So if you measure network performance from those same latency-oriented data points, everything looks fine and no alerts fire.
The issue can also be replicated from the looking glass. I'll attach images below; you can also find them, along with a more detailed report, on the website linked in the post.
There is packet loss and probably flapping on a BGP session, OSPF, or some IGP within Meta's network. I believe it is between 129.134.101.34, 129.134.104.84, and 129.134.101.51. It could be a faulty interface in a bundle or a hardware issue that a "show interface status" doesn't reveal, which is probably why my reports through your NOC have gone nowhere.
How can Meta replicate the failure?
1. Look for random MNA cluster IPs from your clients.
2. Ping them from 157.240.14.15 with a payload larger than 500 bytes (a packet is more likely to get corrupted on a faulty interface as the payload grows).
3. Repeat against many of the servers from step 1.
You will see that once you find the affected upstream or downstream route combination, you will have 10-60% packet loss to the destination host.
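The replication steps above can be scripted. A sketch (the host list, payload size, and function names are mine; the loss-parsing regex targets the standard Linux `ping` summary line):

```python
import re
import subprocess

def parse_loss(ping_output):
    # Extract the percentage from ping's summary line, e.g. "20% packet loss".
    m = re.search(r"([\d.]+)% packet loss", ping_output)
    return float(m.group(1)) if m else None

def measure_loss(host, payload=600, count=20):
    # Payloads >500 bytes make corruption on a faulty interface more likely
    # to surface as discards.
    out = subprocess.run(
        ["ping", "-c", str(count), "-s", str(payload), host],
        capture_output=True, text=True,
    ).stdout
    return parse_loss(out)

# Sweep candidate cluster IPs; an affected path shows sustained 10-60% loss:
# for ip in candidate_ips:
#     print(ip, measure_loss(ip))
```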
How to fix it? Isolate the port or replace the faulty hardware.
Why didn’t we see it before?
Simply put, your monitoring tools and troubleshooting protocols don't work for this class of problem. The protocol is to attach a HAR file, which judges performance by window scaling and TCP RTT; if both look good, then even with data loss there's "no problem." Worse, that HAR file is captured over QUIC, and QUIC is particularly good at masking slowness caused by data loss, since lost packets are retransmitted without TCP's head-of-line-blocking penalty. You know what does use TCP? WhatsApp Statuses, and those are slow.
Can an MTR show where the problem is?
Generally not, because:

In any network route there is a certain number of hops; suppose there are 5 between host A and host B. A traceroute sends packets with increasing TTL values (1, 2, 3, etc.). Each time a packet expires in transit, that hop returns an ICMP "Time Exceeded" message, and that is how the route is mapped. The problem is that these are essentially point-to-point probes; it's like pinging each hop individually. When the fault sits on one interface of an ECMP group or bundle, those probes won't necessarily take the affected member, so the results are unreliable: loss typically appears to come from the final host even though the fault is in the middle. Check metafixthis.com.
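Why probes miss the faulty member can be illustrated with a toy ECMP hash (real routers use vendor-specific hash functions; this only shows the per-flow link selection, and all IPs/ports here are illustrative): a long-lived user flow always hashes to the same bundle member, while traceroute probes vary the port per probe and scatter across members.

```python
import zlib

def ecmp_member(src_ip, dst_ip, src_port, dst_port, n_links=4):
    # Toy stand-in for a router's 5-tuple hash picking one link in a bundle.
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % n_links

# A single TCP flow always lands on the same member (possibly the faulty one)...
flow_link = ecmp_member("203.0.113.7", "157.240.14.15", 51514, 443)

# ...while UDP traceroute probes bump the destination port for each probe,
# so they spread across members and may never touch the bad link.
probe_links = {ecmp_member("203.0.113.7", "157.240.14.15", 33434, 33434 + i)
               for i in range(16)}
```

If the faulty link is one member out of four, roughly a quarter of flows suffer while most individual probes sail through clean.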
A while ago I had a mildly depressing realization.
Back in 2010, I had around $60k. Like a "responsible" person, I used it as a down payment on an apartment. Recently, out of curiosity, I calculated what would have happened if I had instead put that money into NVIDIA stock.
I should probably add some context.
For over 10 years I've worked as a developer on trading platforms and financial infrastructure. I made a rule for myself - never trade on the market.
In 2015, when Bitcoin traded at about $300, my brother and I were debating whether it was a bubble. He made a bold claim that one day it might reach $100k per coin. I remember thinking it sounded unrealistic - and even if it wasn't, I wasn't going to break my rule.
That internal tension - building systems around markets while deliberately staying out of them - is probably what made the "what if?" question harder to ignore years later.
The result was uncomfortable. The opportunity cost came out to tens of millions of dollars.
That thought stuck with me longer than it probably should have, so I decided to build a small experiment to make this kind of regret measurable: https://shouldhavebought.com
At its core, the app does one basic thing: you enter an asset, an amount, and two dates, and it gives you a plain numeric result - essentially a receipt for a missed opportunity.
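The underlying arithmetic is simple (a sketch; the real site presumably also handles splits, dividends, and currency, which this ignores):

```python
def missed_gain(amount, price_then, price_now):
    # Shares you could have bought then, valued at today's price,
    # minus the original outlay.
    shares = amount / price_then
    return shares * price_now - amount

# Illustrative numbers only: $60k into an asset that later 100x'd.
# missed_gain(60_000, 1.0, 100.0) -> 5,940,000
```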
I intentionally designed the UI to feel raw and minimal, almost like a late-90s terminal. No charts, no images, no emotional cushioning - just a number staring back at you.
What surprised me wasn't the result, but how much modern web infrastructure it took to build something that looks so simple.
Although the app is a single page with almost no UI elements, it still required:
- Client-side reactivity for a responsive terminal-like experience (Alpine.js)
- A traditional backend (Laravel) to validate inputs and aggregate historical market data
- Normalizing time-series data across different assets and events (splits, gaps, missing days)
- Dynamic OG image generation for social sharing (with color/state reflecting gain vs loss)
- A real-time feed showing recent calculations ("Wall of Pain"), implemented with WebSockets instead of a hosted service
- Caching and performance tuning to keep the experience instant
- Dealing with mobile font rendering and layout quirks, despite the "simple" UI
- Cron and queueing for historical data updates
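For the normalization bullet above, a minimal sketch of the gap-filling part (forward-fill over weekends and missing days; real split/dividend handling would also adjust the prices themselves):

```python
from datetime import date, timedelta

def forward_fill(closes, start, end):
    # closes: sparse {date: price}. Returns a dense daily series with
    # weekends and missing days carried forward from the last known close.
    filled, last = {}, None
    d = start
    while d <= end:
        if d in closes:
            last = closes[d]
        if last is not None:
            filled[d] = last
        d += timedelta(days=1)
    return filled
```

This matters because the two dates a user types in rarely land on trading days, and the calculation still has to return a price for them.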
All of that just to show a number.
Because markets aren't one-directional, I also added a second mode that I didn't initially plan: "Bullet Dodged". If someone almost bought an asset right before a major crash, the terminal flips state and shows how much capital they preserved by doing nothing. In practice, this turned out to be just as emotionally charged as missed gains.
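The flipped mode is essentially the same formula with the sign reversed (again a sketch, not the site's actual code):

```python
def bullet_dodged(amount, price_then, price_now):
    # Capital preserved by NOT buying before a drop: what you kept
    # minus what the position would be worth now.
    return amount - amount * (price_now / price_then)

# e.g. almost put $10k in right before a 60% crash:
# bullet_dodged(10_000, 100.0, 40.0) -> 6,000 preserved
```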
Building this made me reflect on how deceptive "simplicity" on the web has become. As a manager I know says: "Just add a button". But even recreating a deliberately primitive experience today requires understanding frontend reactivity, backend architecture, real-time transport, social metadata, deployment, and performance tradeoffs.
I didn't build this as a product so much as an experiment - part personal curiosity, part technical exploration.
I'd be very interested to hear how others think about:
- Where do you personally draw the line on stack complexity for small projects?
- Would you have gone fully static + edge functions for something like this?
- How much infrastructure is "too much" for a deliberately minimal interface?
- And, optionally, what's your worst "should have bought" moment?
Happy to answer any technical questions or dig into specific implementation details if useful.