The screenshot was to demonstrate what blocked posts look like. I scrolled past the posts of my friends since I didn't want to dox them, but organic posts do show up near the top of my feed.
The goal isn't to scroll through nothing, but rather to have a clean feed that shows me just my friends and nothing else.
When you press "For you" at the top of the Instagram app (or the logo, if the label doesn't fully load), you can switch to "Following", which shows posts from only your friends from the past 30 days. If you use Instagram in the browser, you can bookmark https://www.instagram.com/?variant=following to make it your default landing page.
I've been increasingly frustrated by the limitations of standard number inputs, especially when dealing with large values or numbers with units. Most applications simply provide a basic number box, which falls short in many scenarios.
To address this, I've created SiMin, a lightweight format for writing and parsing big numbers using SI prefixes with an emphasis on readability.
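The comment doesn't spell out SiMin's actual grammar, so as an illustration only, here is a minimal sketch of what parsing SI-prefixed numbers might look like; the function name, the prefix table, and the accepted syntax are my assumptions, not SiMin's spec:

```python
# Hypothetical sketch of SI-prefix parsing, in the spirit of SiMin.
# The real format's grammar and prefix set may differ.
SI_PREFIXES = {
    "k": 1e3, "M": 1e6, "G": 1e9, "T": 1e12,
    "m": 1e-3, "u": 1e-6, "n": 1e-9,
}

def parse_si(text: str) -> float:
    """Parse strings like '1.5M' or '250k' into plain floats."""
    text = text.strip()
    if text and text[-1] in SI_PREFIXES:
        return float(text[:-1]) * SI_PREFIXES[text[-1]]
    return float(text)
```

So `parse_si("1.5M")` gives `1500000.0`, which is much easier to type and read than the expanded digits.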
The FAST feature detector is an algorithm for finding regions of an image that are visually distinctive, which can be used as a first step in motion tracking and SLAM (simultaneous localization and mapping) algorithms typically seen in XR, robotics, etc.
> Is there TPU-like functionality in anything in this price range of chips yet?
I think that in the case of the ESP32-S3, its SIMD instructions are designed to accelerate the inference of quantized AI models (see: https://github.com/espressif/esp-dl), as well as some signal processing like FFTs. I guess you could call the SIMD instructions TPU-like, in the sense that the chip has specific instructions that facilitate ML inference (EE.VRELU.Sx performs the ReLU operation). But using these instructions still takes CPU time, whereas TPUs are typically their own processing cores, operating asynchronously. I'd say this is closer to ARM NEON.
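To make the comparison concrete, here is a scalar sketch of the operation a vector ReLU instruction accelerates: clamping negative quantized activations to zero across all lanes of a register in one instruction, instead of looping element by element on the CPU. (This is just the mathematical operation; the actual EE.VRELU.Sx semantics on the ESP32-S3 may include extra details like scaling.)

```python
# Scalar equivalent of a vector ReLU over int8 "lanes": every negative
# activation is clamped to zero, positives pass through unchanged.
def relu_i8(lanes: list[int]) -> list[int]:
    return [x if x > 0 else 0 for x in lanes]
```

A SIMD unit does this for a whole vector of int8 values per instruction, which is where the speedup for quantized inference comes from.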
> Up to 200x Faster Inner Products and Vector Similarity — for Python, JavaScript, Rust, C, and Swift, supporting f64, f32, f16 real & complex, i8, and binary vectors using SIMD for both x86 AVX2 & AVX-512 and Arm NEON & SVE
> The SIMDe header-only library provides fast, portable implementations of SIMD intrinsics on hardware which doesn't natively support them, such as calling SSE functions on ARM. There is no performance penalty if the hardware supports the native implementation (e.g., SSE/AVX runs at full speed on x86, NEON on ARM, etc.).
> This makes porting code to other architectures much easier in a few key ways:
> The FAST feature detector is an algorithm for finding regions of an image that are visually distinctive, …
Is that related to ‘Energy Function’ in any way?
(I ask because a long time ago I was involved in an Automated Numberplate Reading startup that was using an FPGA to quickly find the vehicle numberplate in an image)
What you are thinking of operates at a different level of abstraction. Energy functions are a general way of structuring a problem, used (sometimes abused) to apply an optimization algorithm to find a reasonable solution for it.
FAST is an algorithm for efficiently looking for "interesting" parts (basically, corners) of an image, so you can safely (in theory) ignore the rest of it. The output from a feature detector may end up contributing to an energy function later, directly or indirectly.
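To show what "efficiently looking for corners" means in practice, here is a minimal sketch of the FAST segment test: a pixel is a corner candidate if at least n contiguous pixels on the 16-pixel Bresenham circle around it are all brighter than the center by a threshold t, or all darker. (Simplified: the real detector adds a high-speed pretest on four circle pixels and non-maximum suppression.)

```python
# Offsets of the 16-pixel Bresenham circle of radius 3 around a candidate.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_fast_corner(img, x, y, t=20, n=9):
    """img: 2D list of grayscale values. True if >= n contiguous circle
    pixels are all brighter than img[y][x] + t or all darker than img[y][x] - t."""
    p = img[y][x]
    # Classify each circle pixel: +1 brighter, -1 darker, 0 similar.
    marks = [1 if img[y + dy][x + dx] > p + t
             else (-1 if img[y + dy][x + dx] < p - t else 0)
             for dx, dy in CIRCLE]
    # Double the list so a run can wrap around the circle.
    run, prev = 0, 0
    for m in marks + marks:
        run = run + 1 if (m != 0 and m == prev) else (1 if m != 0 else 0)
        prev = m
        if run >= n:
            return True
    return False
```

Because most pixels fail the test after examining only a few circle pixels, the detector can discard the bulk of the image cheaply, and only the surviving corners are passed on to the more expensive stages of a tracking or SLAM pipeline.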