

Thank you for visiting my blog!

Brief Overview

Hello! My name is Kateryna, and I'm based in Kyiv. Here, I will be sharing my thoughts. My main focus is QA, automation, and writing code in Python. I also have some basic knowledge of Arduino: in the past, I built my own traffic light that displayed automation test results based on Jenkins build status. However, that was quite a while ago, so let's start fresh. Right now, I'm aiming to concentrate on technologies that are more up-to-date and relevant in today's landscape, such as AI and ML.
Recent posts

MediaPipe Journey: Exploring Pose & Hand Landmark Detection

Hey everyone! I've recently dived into the fascinating world of MediaPipe, focusing on two intriguing aspects: Pose Landmark Detection and Hand Landmark Detection. The potential applications are vast, and I'm thrilled to share my initial ideas:

1️⃣ Posture Alert System: I'm developing a script that alerts me if I've been sitting for too long. This could be a game-changer for maintaining better posture and health during long work sessions.

2️⃣ Hand Detection for Virtual Music: Just for fun (for now), I'm working on a script that turns hand movements into piano playing. Each section of the screen corresponds to a different musical tone, allowing me to play music without a physical piano.

Stay tuned for more updates, and feel free to share your thoughts or similar projects you're working on! 💖
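The virtual-piano idea boils down to mapping a hand's horizontal position to a note: split the screen into vertical zones and look up which zone the hand is in. A minimal sketch of that mapping (the seven note names and the zone count are my own illustrative choices, not taken from the actual script):

```python
# Map a normalized hand x-coordinate (0.0 = left edge of the frame,
# 1.0 = right edge) to a piano note by splitting the screen into
# equal-width zones, one per note.
NOTES = ["C", "D", "E", "F", "G", "A", "B"]  # illustrative 7-note scale

def note_for_x(x: float, notes=NOTES) -> str:
    """Return the note whose screen zone contains x (clamped to [0, 1])."""
    x = min(max(x, 0.0), 1.0)
    # Clamp the zone index so x == 1.0 maps to the last note, not past it.
    zone = min(int(x * len(notes)), len(notes) - 1)
    return notes[zone]

print(note_for_x(0.05))  # leftmost zone  -> C
print(note_for_x(0.99))  # rightmost zone -> B
```

In a real script, x would come from a MediaPipe hand landmark's normalized x-coordinate, and the returned note would trigger a sound.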

Soil moisture sensor (hygrometer) and KISS

I always tell everyone about KISS (keep it simple, stupid): do everything as simply as you can, and if it can be simpler, make it simpler. Today, I realized I went overboard. I lied. Because today I connected flowers to an Arduino to see when they need watering.

Red means it's dry and needs watering.
Blue is okay.
Green is all good.

It's very easy to add a voice, so the flower would speak from a speaker saying "water me." Or to add automatic watering, so water from a bottle would pour itself through a tube. Example here. But isn't that over-engineering? And yet, I preach KISS... 😄

Elements: (the total cost is slightly over 1 euro)

Demo (testing on Production Env 🙀):

Please share your thoughts. Would you want this in your home, or is it too much? 💦😂😍
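Driving the colors comes down to bucketing the sensor's raw analog reading into three states. A minimal sketch of that bucketing in Python (the threshold numbers are illustrative assumptions, not the calibrated values from my build; on typical resistive hygrometers a higher reading means drier soil):

```python
# Bucket a raw analog moisture reading (0-1023 on an Arduino UNO's ADC)
# into a status color. Threshold values below are illustrative assumptions
# and would need calibration against real dry/wet soil.
DRY_THRESHOLD = 700  # above this: soil is dry, water the flower
OK_THRESHOLD = 400   # above this (but <= DRY_THRESHOLD): soil is okay

def moisture_color(reading: int) -> str:
    """Return 'red' (dry, water me), 'blue' (okay), or 'green' (all good)."""
    if reading > DRY_THRESHOLD:
        return "red"
    if reading > OK_THRESHOLD:
        return "blue"
    return "green"

print(moisture_color(900))  # red
print(moisture_color(550))  # blue
print(moisture_color(200))  # green
```

The same three-way branch works unchanged whether the reading arrives over pyFirmata, a serial port, or is computed on the board itself in C.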

Arduino UNO and Python

When we talk about microcontrollers like Arduino, the first thing that comes to mind is C/C++. To write something complex, one supposedly needs to know C/C++. But is that really the case? I just want to share how I do everything I want with Arduino using Python.

Why Python? One of the reasons is the availability of numerous ready-made libraries, which makes integration with controllers super fast. For instance, I've integrated MediaPipe with my Arduino. The result? The LCD displays hand positions, and using 5 LEDs, I can indicate the number of fingers I'm showing with my hand:

When I show 0 fingers, all lights are off.
For 1 finger, 1 LED lights up.
For 2 fingers, 2 LEDs light up.
For 3 fingers, 3 LEDs light up.
For 4 fingers, 4 LEDs light up.
And for 5 fingers, all 5 LEDs light up.

So, there you have it: a simple demonstration of integrating MediaPipe with Arduino using just one script. Could you achieve this as qu…
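The finger-to-LED mapping in the list above is simple: for a count of N, the first N of the 5 LEDs are on and the rest are off. A minimal sketch of that logic as a pure function (in the real script the resulting booleans would be written to pins, e.g. via pyFirmata; the pin wiring is omitted here):

```python
# Turn a detected finger count (0-5) into on/off states for 5 LEDs:
# the first `count` LEDs are lit, the rest are dark.
NUM_LEDS = 5

def led_states(count: int, num_leds: int = NUM_LEDS) -> list:
    """Return one boolean per LED for the given finger count."""
    count = min(max(count, 0), num_leds)  # clamp to the valid range
    return [i < count for i in range(num_leds)]

print(led_states(0))  # all off
print(led_states(3))  # first three on
print(led_states(5))  # all on
```

Keeping this as a pure function makes it trivial to unit-test without any hardware attached, which fits the QA angle of this blog.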

Python MediaPipe (Hands)

Hi! Today, I want to delve into something truly exciting in the world of computer vision: MediaPipe. Developed by Google, MediaPipe is a framework that has been revolutionizing the way we interact with and understand visual data. There are 11 different modules:

drawing_styles: Customize styles for drawing detected objects and landmarks.
drawing_utils: Utility functions for drawing on images and videos.
face_detection: Detect and locate faces within images and video streams.
face_mesh: Analyze the contours and features of a face.
face_mesh_connections: Define the connections between face landmarks.
hands: Track and analyze hand gestures and positions.
hands_connections: Connect landmarks in the hands for visualization.
holistic: Comprehensive solution for analyzing face, hands, and body posture.
objectron: 3D object detection and spatial understanding.
pose: Detect and analyze human body postures.
selfie_segmentation: Segment a person from the background.
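The hands module returns 21 landmarks per detected hand, indexed in a fixed order (fingertips are 4, 8, 12, 16, 20). A common trick for counting raised fingers is to compare each fingertip's y-coordinate with the joint two landmarks below it: image y grows downward, so a raised fingertip sits at a smaller y. A minimal sketch of that heuristic using plain (x, y) pairs instead of real MediaPipe output (the thumb is skipped for simplicity, since it needs an x-based comparison):

```python
# Count raised fingers from 21 hand landmarks given as (x, y) pairs,
# using MediaPipe's hand landmark indexing: fingertips 8, 12, 16, 20
# (index..pinky) and their PIP joints 6, 10, 14, 18. Image y grows
# downward, so "raised" means tip y < PIP y. Thumb (tip 4) is skipped.
FINGER_TIPS = [8, 12, 16, 20]
FINGER_PIPS = [6, 10, 14, 18]

def count_raised_fingers(landmarks) -> int:
    """landmarks: sequence of 21 (x, y) pairs in image coordinates."""
    return sum(
        1
        for tip, pip in zip(FINGER_TIPS, FINGER_PIPS)
        if landmarks[tip][1] < landmarks[pip][1]
    )
```

With real MediaPipe, the pairs would come from the `x`/`y` attributes of the landmarks in a `hands` detection result; the counting logic itself stays the same.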

Tell me what to do, and I will do it.

This month, I have some free time, and I'm attending many IT events. I've noticed a trend: now everyone wants ideas. About five years ago, it was enough just to know a technology or a programming language even a little. Companies wanted you. Now, because the world has become fast-paced, everyone wants ideas and solutions. There are many startups and hackathons (even Nova Poshta, a Ukrainian company that deals with the delivery of goods, put up a huge poster near my house and organized an IT hackathon). Large companies often want innovation and solutions. So, if before you could come to a company, ask a manager "what should I do?", and just execute step by step what you were told, now that DOESN'T work. Because nobody knows what needs to be done. Nobody knows which framework or technology to choose. Often you need to take a few, compare them, choose, and propose something. Find a solution, come to your te…

How to Create and Deploy a Static Website in 5 Simple Steps from Scratch Using GitHub Pages (Free)

This guide will walk you through creating and deploying a simple website using GitHub Pages.

Steps:

1. You Should Have a GitHub Repository
Before you begin, make sure you have a GitHub account and a repository where you will store the files for your website.

2. Basic Settings
Read and Write Permissions:
- Go to the "Settings" tab within your repository.
- Click on "Actions" and then "General."
- In the "Workflow permissions" section, select "Read and write permissions" and check "Allow GitHub Actions to create and approve pull requests."
Build and Deployment Settings:
- Go to the "Settings" tab within your repository.
- Click on "Pages."
- Under "Build and deployment," select the branch you want to deploy (for example, Branch -> main).

3. Create a GitHub Configuration File
GitHub Actions allows you to automate, customize, and execute your software development workflows right in your GitHub repository. You can create custom…
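For step 3, the configuration file is a workflow YAML placed under .github/workflows/ in the repository. A minimal sketch of what such a file could look like for publishing a static site from the main branch (the file name, branch, publish directory, and the choice of the community peaceiris/actions-gh-pages action are illustrative assumptions, not the exact workflow from this guide):

```yaml
# .github/workflows/deploy.yml -- illustrative file name
name: Deploy static site to GitHub Pages

on:
  push:
    branches: [main]   # run on every push to main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Publish to GitHub Pages
        uses: peaceiris/actions-gh-pages@v3   # community deploy action
        with:
          github_token: ${{ secrets.GITHub_TOKEN && secrets.GITHUB_TOKEN }}
          publish_dir: ./   # directory containing the site files
```

This is why step 2's "Read and write permissions" matters: the workflow pushes the built site to a branch using the repository's own token.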

AI Image

I started my acquaintance with artificial intelligence for images (DALL·E 2, Midjourney, and …). This is not exactly "my thing," since I'm not involved with drawing at all. But anyway, my feelings are mixed.

The pros:
You can quickly generate a beautiful image.
Very beautiful colors and nice silhouettes.
For those who constantly need images for social networks, it's now very easy to do it all.
If the image is in the style of a "real photo," it's like creating a new reality.

The cons:
When I watch a person draw, I always think about why they chose that color. Why they chose that pattern. What stands BEHIND the drawing. What was invested in it. There seems to be more meaning and more energy there. An image generated by artificial intelligence is just generated content that was "created" based on my prompt and various other images that already exist, modified and changed by the program. This is very beautiful. It's ve…