Author: Jorge Iglesias

  • A Complete Workflow for Computer Product Photography


    Reading Time: 7 minutes

    I’ve been passionate about PC building since the early-to-mid 1990s. I still remember upgrading my dad’s 486DX2—installing a Cyrix 586 CPU, adding RAM, and upgrading the hard drive multiple times to keep that computer running. Back then, documenting those upgrades with photos never crossed my mind. Even for the first computer I built from scratch, in the early 2000s, I only have a few scattered photos. They lack the level of detail I wish I had today, but back then, digital cameras weren’t as accessible, and computer product photography wasn’t something I considered.

    Fast forward to 2017, when I reignited my passion for PC building and restoring older computers. With advancements in technology—not just in hardware but in tools for creating and sharing digital content—I realized the importance of documenting my work. Computer product photography became a critical part of that process, allowing me to showcase the systems I restore in a professional way. Today, I immerse myself in every aspect of technology, from building and restoring to coding, designing, and sharing my projects through photography. This focus on computer product photography allows me to highlight my efforts, share my passion, and preserve the story of each build.

    Documenting computers isn’t just a simple point-and-shoot process. It’s a deliberate workflow—a series of steps that involves planning, capturing, and editing, each as important as the next. This computer product photography workflow ensures that my restored systems look professional, my work is preserved, and my passion reaches others through platforms like social media and my website.

    Why Documenting Matters

    For me, the real joy comes from being hands-on with computers—whether it’s building, upgrading, or restoring them. I love the process of assembling components, installing the operating system, and ensuring everything works perfectly by configuring the right drivers. Once that’s done, I might play a few games to test the system for a week or two, but soon, it’s on to the next project.

    Without documentation, though, all that effort becomes a fleeting memory. That’s why I make it a priority to capture the process. I take a lot of photos—before and after shots for restorations, detailed images of the computer case, motherboard, and individual components. This isn’t just for aesthetics; it’s also about creating a record of the computer’s build, whether it’s using original parts or upgraded components.

    Having detailed photos and notes serves multiple purposes. It helps me keep track of what’s inside each system, maintain an inventory of parts, and log what I spent on components. This information is invaluable if I decide to sell a part or the entire system—because I already have a visual and written history ready to go. It also makes it easy to share my builds on platforms like Facebook, Instagram, or Twitter, showcasing my work to a broader audience.

    Preparing for the Shoot

    Preparing for a photoshoot starts with a lot of planning. First, I think about what I’m shooting and why. Am I documenting new parts with their boxes or used components? Will I be taking before-and-after shots of a restoration, or focusing on the final build? Each purpose dictates a different approach, so clarity is key from the start.

    Next, I consider the setting. Where will I take the photos? Fortunately, I have a dedicated space in my house that works well for photographing computers and components. I’m also lucky to have some equipment on hand, like LED lighting and a ring light (which I repurpose as my tripod). These tools help create consistent lighting and reduce shadows, but sometimes I experiment with natural light depending on the mood I want to capture.

    Background is another crucial factor. A clean, uncluttered background keeps the focus on the computer or components, so I often use a white sheet as a portable backdrop. I also plan the framing and angles I’ll use, deciding which parts of the build to highlight. For instance, I might want to showcase the interior layout, upgraded GPUs, or unique case designs.

    Equipment preparation is just as important. I ensure my camera’s memory card is clear, batteries are fully charged, and lenses are clean and ready to go. I typically choose lenses based on the type of shots I want—for wide views, a standard zoom works well, but for detailed component shots, I’ll use a macro lens. I also keep a spreadsheet on my iPad Pro handy to track my camera settings (like aperture and shutter speed) so I can replicate or adjust the setup for future shoots.
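The settings spreadsheet could just as easily live in a plain CSV that a small script appends to. Here's a minimal stdlib Python sketch of that idea; the column names are my own invention, not a standard:

```python
import csv
from pathlib import Path

# Columns are my own invention -- adjust to match whatever you actually track.
FIELDS = ["date", "subject", "lens", "aperture", "shutter", "iso", "notes"]

def log_shot(csv_path, **settings):
    """Append one row of camera settings, creating the file with a header if needed."""
    path = Path(csv_path)
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({k: settings.get(k, "") for k in FIELDS})

def load_log(csv_path):
    """Read the log back as a list of dicts, one per shot."""
    with Path(csv_path).open(newline="") as f:
        return list(csv.DictReader(f))
```

The payoff is the `load_log` half: replicating a previous setup becomes a lookup instead of guesswork.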

    Finally, I organize my workspace to minimize distractions. This includes having all the components or computers laid out neatly, cleaning up any dust or smudges, and double-checking that everything I want to photograph is easily accessible. Good organization not only saves time but also ensures that every detail of the shoot is captured perfectly. With all the preparations complete—from planning shots to organizing equipment—I’m ready to bring my vision to life. Capturing the photos is where creativity and preparation meet, and it’s my favorite part of the process.

    Capturing the Photos

    Once my preparations are complete, it’s time to bring the plan to life. I use the checklist I created during setup as a guide to ensure I capture every shot I’ve planned. This includes standard angles, like the front, back, and sides of the computer, as well as more creative compositions. For example, I take diagonal shots from the front or back corners and remove the side panel to photograph the internal components. These varied perspectives help showcase the computer’s design and functionality.

    Retro computers, in particular, can be visually understated, so I make it a priority to highlight the details that make them unique. Close-ups of buttons, stickers, and ports help draw attention to the finer elements of the build. These shots not only add visual interest but also give viewers a clearer sense of the computer and its components, even if they’ve never seen them in person. By experimenting with creative angles, I aim to make even plain-looking systems feel dynamic and engaging.

    For equipment, I rely on my Canon 80D and three lenses: a standard zoom for wide shots, a macro lens for close-ups, and a wide-angle lens for full-system photos. The Canon Camera Connect iOS app allows me to adjust settings and take photos remotely from my phone, making the process more efficient and precise. This flexibility is especially helpful when I’m focusing on intricate details or working in tight spaces.

    Before diving into the full shoot, I always take a few test shots. These help me fine-tune the lighting, focus, and framing, ensuring everything looks just right. While test shots are invaluable, the real magic often happens in post-processing, where I refine and enhance the images to bring out their best qualities. By combining a thorough plan with careful execution, I’m able to produce polished, professional photos that truly showcase each computer’s story.

    With the photos captured, the next step is to refine them into polished visuals that truly showcase the details of each build.

    Post-Processing

    Post-processing is where the magic happens, but it’s also an area where I’m still learning the ropes. Currently, I use the Adobe Photography Plan, which gives me access to Lightroom and Photoshop. While these tools are powerful and user-friendly, I’m not a fan of paying subscription fees, especially since I don’t use them every day. That’s why I’ve recently started exploring Darktable, an open-source alternative to Lightroom. Darktable has a steeper learning curve, but as someone who wants to deepen my photography skills, it feels like a worthwhile investment of time.

    Regardless of the software, my post-processing workflow focuses on a few key tasks. First, I adjust the brightness and contrast to ensure the photo is clear and visually appealing. Then, I straighten the image if needed, which is particularly important for computer photos to maintain a clean, professional look. Next, I apply a watermark to protect my work and maintain a consistent brand identity. Finally, I export the photo in the appropriate resolution and format for its intended use—whether it’s for my website, social media, or another platform.
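The export step is the easiest one to script. As a sketch, here's how per-platform output sizes might be computed while preserving aspect ratio; the preset dimensions are illustrative assumptions, not official platform specs:

```python
# Hypothetical per-platform size limits -- check each platform's current specs.
EXPORT_PRESETS = {
    "website":   (1600, 1600),
    "instagram": (1080, 1350),
}

def fit_within(width, height, max_w, max_h):
    """Scale (width, height) down to fit inside (max_w, max_h), keeping aspect ratio."""
    scale = min(max_w / width, max_h / height, 1.0)  # never upscale
    return round(width * scale), round(height * scale)
```

For example, a 6000×4000 frame destined for the "website" preset comes out at 1600×1067, while anything already inside the limits passes through untouched.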

    One area I’m experimenting with is batch processing. While it’s tempting to automate the editing process, I find that many photos require individual attention, especially when it comes to fine-tuning details like color balance or cropping. My goal is to strike a balance between efficiency and quality, perhaps by developing a hybrid approach where I apply basic edits in bulk but still review each photo for manual adjustments.
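The hybrid approach described above amounts to merging a shared base preset with per-photo overrides. A sketch, with purely illustrative setting names and values:

```python
# Shared starting point applied to every photo; values here are made up.
BASE_PRESET = {"exposure": 0.3, "contrast": 10, "crop": None}

def plan_edits(photos, overrides=None):
    """Return {filename: settings}, merging the base preset with per-photo tweaks."""
    overrides = overrides or {}
    return {name: {**BASE_PRESET, **overrides.get(name, {})} for name in photos}
```

Photos with no overrides get the bulk treatment; the ones that need individual attention only record what differs from the base.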

    Post-processing isn’t just about making photos look better; it’s about bringing out the best in each shot. Whether it’s emphasizing the shine of a retro case or enhancing the colors of a motherboard, the editing process helps tell the story of the computer in a way that raw images simply can’t.

    Once the images are edited and polished, they’re ready to be shared. For me, publishing these photos isn’t just about displaying my work—it’s about connecting with others and preserving the story of each computer I restore.

    Publishing and Sharing

    After completing the post-processing, the next step is to share the results. Right now, my primary platform is this website, where I showcase both the photos and detailed information about each computer. It’s a space to document my work, reflect on my progress, and share my passion for computer restoration with others who might have similar interests.

    In the future, I’d love to expand beyond my website. Platforms like Instagram and marketplaces such as eBay or Facebook Marketplace could be great opportunities—not just for sharing my photography but also for connecting with potential buyers if I ever decide to sell my computers. These platforms allow for more interaction and visibility, turning each post into a way to engage with the broader tech community.

    That said, one of my biggest challenges right now is organization. Deciding where to save original files versus post-processed images and maintaining a clear folder structure can be overwhelming, especially as my library grows. I’m working on creating a system that keeps everything accessible and well-labeled, so I can find and repurpose content easily when needed.
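One way to tame that organization problem is to generate folder paths from a naming rule instead of inventing them by hand each shoot. A minimal sketch, assuming a date_slug/stage convention of my own devising:

```python
from datetime import date
from pathlib import Path

def shoot_folder(root, system_name, stage, shoot_date=None):
    """Build a path like root/2025-03-01_micron-win95/originals (or /edited).

    The layout is just one possible convention, not a standard.
    """
    assert stage in ("originals", "edited")
    d = (shoot_date or date.today()).isoformat()
    slug = system_name.lower().replace(" ", "-")
    return Path(root) / f"{d}_{slug}" / stage
```

Because every path comes from the same rule, originals and edited versions of the same shoot always sit side by side and sort chronologically.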

    Ultimately, the goal of publishing my work is twofold. First, it’s a personal archive—a way for me to revisit my projects and see how far I’ve come. Second, it’s a way to inspire and connect with others who share a love for retro tech, computer components, and creative photography. As I continue refining my workflow, I’m excited to see how sharing my work evolves and the kinds of connections it might foster.

    Publishing my photos is the culmination of everything I love about this process: the creativity, the technical challenges, and the joy of sharing my passion with others. Whether I’m showcasing a fully restored retro PC or diving into the details of a motherboard, each post is a chance to preserve the story of a computer and connect with a community that shares my enthusiasm. As I continue refining my workflow and exploring new platforms, I’m excited to see where this journey takes me—both as a photographer and as someone who loves breathing new life into old technology.

    Each photo I take is more than just an image—it’s a way to document the countless hours spent restoring a piece of technology to its former glory. Sharing these moments allows me to connect with others who appreciate the history and artistry of computers, while also building a personal archive that inspires me to keep growing.

  • Why Restoring Computers Isn’t as Simple as It Seems


    Reading Time: 5 minutes

    I’m passionate about computers; anything and everything about them interests me. However, over the last few years, restoring computers from the 90s and early 2000s, like those running MS-DOS, Windows 3.1, Windows 95, Windows 98, and Windows XP, has been one of my favorite hobbies. This passion inspired me to create a website to showcase my builds, colliercomputers.com. While assembling cutting-edge PCs with the latest technology is exciting, I find immense joy in bringing vintage systems back to life. Some retro machines hold a special charm, and with an abundance of parts and games on the used market, the hobby is both accessible and rewarding. But restoring computers isn’t as simple as it seems. The journey can be rewarding, yet the high cost of sought-after components, recurring hardware compatibility issues, and the ongoing struggle to locate the correct software drivers add significant complexity to the process.

    The Surprising Cost of Retro PC Components

    Restoring a retro PC from the 90s or early 2000s might sound like a straightforward project—grab an old machine, swap in some parts, and relive the glory days of computing. But as I’ve discovered, it’s not that simple. What I assumed would be a budget-friendly hobby quickly revealed hidden challenges: scarce components, sky-high prices, and the unpredictable condition of decades-old hardware. If you’re a computer builder who mods or upgrades your own PC, you’re likely familiar with the cost of modern parts and the thrill of scoring a deal. I figured upgrading a retro system with top-tier components would be cheap—maybe $100 to $300 total—but I was stunned to find 25-year-old GPUs selling for over $300. These unexpected hurdles have kept me from chasing the ultimate retro setup I envisioned.

    I consider myself a savvy shopper, and with enough patience, platforms like Facebook Marketplace can be goldmines for retro computing enthusiasts. Lurk long enough, and you might find people practically giving away old computers—sometimes even in working condition. But “working condition” is a gamble. The first retro PC I snagged was a Micron running Windows 95, complete with a monitor, for just $75. It seemed like a steal for a full setup, even though the seller admitted they couldn’t power it on or troubleshoot it. With my experience, I was confident I could revive it—and I did, but not without effort. Restoring these machines often means wrestling with degraded capacitors, dusty internals, or missing drivers, turning a “quick fix” into a time-consuming puzzle.

    Once I got the computer running, I decided to upgrade it, starting with the graphics card. For a 90s-era system, the Voodoo 3DFX is iconic—a holy grail for retro gamers. But when I checked eBay, I was floored: prices for these cards often exceed $500. It’s not just the Voodoo; other high-end components from that era, like the Intel Pentium III or Sound Blaster 16, carry premium price tags due to their rarity and nostalgia-driven demand. Sure, you can still find lower- and mid-tier hardware at reasonable prices, but building a top-of-the-line retro PC is a different story. The deeper I dug, the clearer it became: restoring a high-spec system from this era isn’t just about finding parts—it’s about navigating a market where scarcity and sentimentality inflate costs beyond reason.

    Hardware Compatibility Issues

    Hardware compatibility issues exist with modern builds too, but verifying retro computer component compatibility is far less straightforward. Thankfully, the internet and, more recently, AI tools like ChatGPT have been a tremendous help in confirming compatibility, though the challenge still exists. Between 1990 and 2010, so many components were released that not all of them have been thoroughly documented.

    One of the most common struggles involves video cards. ISA, PCI, and AGP cards each require a specific slot, and modern PCI Express cards are a different standard entirely. Ensuring your video card matches a slot your motherboard actually has is critical to avoiding compatibility issues.

    Another major concern is power supply compatibility. Many older computers draw most of their power from the 5V rail, which feeds key components like the motherboard and CPU. Modern PSUs, however, are designed to deliver most of their power on the 12V rail and may not supply enough current on the 5V rail to run older hardware reliably.
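As a rough illustration of the 5V-rail concern, a back-of-the-envelope check might compare a build's estimated 5V draw against what the PSU label promises on that rail, with some headroom. This is a sketch with made-up numbers, not electrical advice:

```python
def rail_ok(psu_rail_amps, component_amps, headroom=1.2):
    """True if the rail can cover the listed component draws with ~20% headroom.

    psu_rail_amps: the amperage printed on the PSU label for this rail.
    component_amps: estimated per-component draws (all illustrative guesses).
    """
    return sum(component_amps) * headroom <= psu_rail_amps
```

For example, a board, CPU, and drives guessed at 6A + 4A + 3A on the 5V rail are comfortably inside a 25A rating, but would be marginal on a modern PSU that offers only 10A at 5V.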

    Keeping spare components on hand makes it far easier to troubleshoot and tinker with different component combinations.

    The Struggle with Software Drivers

    Rebuilding retro computers often means investing in specific components to ensure everything works as it should. However, even with the right hardware, software drivers can pose a significant challenge. For instance, a sound card designed for a specific vendor, like a Dell-branded sound card, may not function properly with another manufacturer’s system due to proprietary drivers or firmware.

    Driver compatibility isn’t just a hardware issue—it’s also a software availability problem. Many drivers for older systems are no longer supported by manufacturers and can be difficult to track down. This means relying on community forums, archival websites, or driver repositories to find the files you need. Even then, the drivers may not always work as expected or could require extensive troubleshooting to install on legacy operating systems.

    In some cases, enthusiasts turn to open-source driver projects or hardware emulation as a workaround, but these solutions aren’t always ideal or reliable. The challenge of finding and configuring the right drivers adds another layer of complexity to retro computing but also makes the eventual success all the more rewarding.

    Component Failures and Repairs

    Sometimes, components simply fail. This isn’t unique to retro computing—even modern hardware can arrive defective, which is why manufacturers offer RMA (Return Merchandise Authorization) processes for replacements or repairs.

    However, RMA options only apply to new products under warranty. When you’re buying used parts or a second-hand computer, you won’t have the safety net of an RMA. This means you’ll need to troubleshoot issues yourself and, in some cases, complete repairs to get the component working again.

    Over time, I’ve built up an inventory of spare parts specifically for these situations. Maintaining a stockpile of components—like motherboards, CPUs, RAM, and power supplies—has been invaluable when restoring older systems. Not only does it help speed up troubleshooting, but it also reduces downtime if I need to replace a part.

    That said, building an inventory comes with its own set of challenges. Finding parts in good working condition requires patience, and keeping everything organized can be tricky. I’ve learned to label and test components as I acquire them to ensure they’ll work when I need them.

    Despite the effort, having spare parts on hand is a game-changer, especially when working with rare or mission-critical hardware. It allows me to experiment, test different configurations, and keep retro systems running smoothly, even when the unexpected happens.

    Why It’s All Worth It

    For me, restoring computers is an incredibly rewarding experience. It starts with the joy of disassembling the machine, carefully reviewing its internal components, cleaning each part, and then reassembling everything. There’s something deeply satisfying about putting it all back together and seeing the system come to life.

    Equally enjoyable is the process of configuring the operating system and drivers to achieve a fully functional setup. Getting the video card to display proper resolution and colors, or ensuring the sound card delivers crisp audio, feels like solving a complex puzzle—challenging but immensely gratifying once complete.

    And then comes the best part: putting the restored system to the test with classic games. Whether it’s revisiting old favorites or discovering hidden gems, playing on a retro machine takes me back in time. For me, there’s nothing better than the combination of nostalgia and accomplishment that comes from bringing an old computer back to life and using it the way it was meant to be.

  • 2025 – Week 9: 3D Printing Revival, Rubik’s Cube Robot, and More


    Reading Time: 2 minutes

    Welcome to my Week 9 2025 Updates! This week was packed with projects pulling me in all directions, and I loved every minute of it.

    First up, I dusted off my Prusa 3D printer after what feels like forever—maybe a year, maybe two. I can’t even remember the last thing I printed, but I’m determined to bring it back to life. I started with some basic maintenance: cleaning and lubricating the z-rods. Then, I tried printing a Benchy (the classic 3D printing benchmark), but the results were less than stellar. The printer needs some serious tuning, and I’m still troubleshooting why it’s not laying down filament properly. Fingers crossed I’ll crack the code this week—stay tuned for progress!

    Next, I’ve been sketching out plans for an ambitious new project: a robot that solves a Rubik’s Cube. After building websites for so long, I’m ready to flex my skills on something tangible and mechanical. This robot will need a vision system to ‘see’ the cube, arms and grippers to manipulate it, plus a control system, power setup, and a sturdy structure. I’m particularly excited to use my freshly revived 3D printer to create the arms, grippers, and base. It’s a fun challenge, and I can’t wait to see it come together.

    On the digital front, I spent time enhancing my site, swfl.io, by adding a new Traffic page. Phase one was straightforward: integrating Google Maps with live traffic data. Phase two is where it gets interesting—I’m using my own AI setup to scrape news content from the site and extract traffic and event-related info. The proof of concept is working, though I’ve got a few bugs to squash. This project’s got legs, and I’ll keep refining it in the coming weeks.
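As a drastically simplified stand-in for that AI extraction step, even a keyword filter can separate traffic-related headlines from the rest. The keyword list below is my own guess, not what swfl.io actually uses:

```python
import re

# Hypothetical keyword list -- a real pipeline would use the AI model instead.
TRAFFIC_WORDS = re.compile(
    r"\b(crash|closure|closed|detour|accident|lane|traffic)\b", re.IGNORECASE
)

def traffic_headlines(headlines):
    """Return only the headlines that mention traffic-related keywords."""
    return [h for h in headlines if TRAFFIC_WORDS.search(h)]
```

A filter like this makes a decent pre-pass: cheap to run on every scraped headline, with the AI reserved for the candidates that survive.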

    Oh, and I also dipped into graphic design this week, creating a business card for my landscaper. I used Adobe Express to design it and VistaPrint to get it printed. The cards should arrive soon, and I’ll share a photo once they’re in hand.

    That’s the rundown of my Week 9 2025 Updates. It was a whirlwind of 3D printing, robot planning, web dev, and design. Let’s see what I can tackle next week!

  • 2025 – Week 8: Fixing swfl.io and Testing AI Tools


    Reading Time: 2 minutes

    This week, I’ve been wrestling with a deployment issue on my swfl.io news aggregator site. At first, I suspected my Docker setup was the culprit, but after digging deeper, I realized the real problem was in my Nginx configuration. It still had custom rules from my old React app that didn’t play nice with Next.js, tanking the site. I spent hours banging my head against the wall—even started a second project from scratch out of desperation—but thankfully, I cracked it in the end.

    On a different note, I teamed up with my landscaper to design a business card for him. I’m also troubleshooting some email issues he’s having—Hotmail might be the gremlin here, but I’m not sure yet. We’ll need to run a few more tests to pin it down.

    I’ve been tinkering with n8n to find solid use cases, but honestly, I’m not vibing with it. I’m leaning toward switching to Python with my Ollama and DeepSeek setup instead. Web scraping and search-related ideas are at the top of my list to explore.

    As we head into the final month of Q1, I’m starting to think about upgrading my systems. Scoring a 5090 Ti feels like a long shot, but it’s definitely on my radar.

    Benchie.io is up and running, and I’ve added the ability to capture AI benchmarks, which is cool. But I’ve hit a roadblock with letting users connect to their local Ollama instances—might need to whip up a WebSocket bridge or something to make it work.
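Whatever transport ends up carrying the data for that local-Ollama connection, the payload itself is simple: Ollama's /api/generate endpoint streams newline-delimited JSON chunks, each carrying a `response` fragment, with `done: true` on the last one. A sketch of reassembling such a stream (the sample lines are hand-written, not captured output):

```python
import json

def join_stream(ndjson_lines):
    """Concatenate the "response" fragments from a streamed Ollama-style reply."""
    parts = []
    for line in ndjson_lines:
        if not line.strip():
            continue  # skip blank keep-alive lines
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)
```

The same reassembly logic works whether the lines arrive over a plain HTTP stream or get relayed through a WebSocket.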

    On the AI front, I’ve been testing OpenAI’s Pro plan, particularly their Deep Research features. I’m a fan of o1 Pro and its voice playback—it’s a nice touch. I haven’t tried the Operator feature yet, but I’ll give it a spin later this week. Meanwhile, I’ve been loving xAI’s Grok 3—it’s holding its own against ChatGPT, maybe even edging it out. It’s got me considering ditching my $200 OpenAI Pro plan for their $20 tier. Anthropic’s Claude 3.5 Sonnet still feels worth it, though. I’m likely dropping my Cursor membership, but I’ll keep using Cline in VS Code with my Anthropic API key.


    Looking back, tech troubleshooting is half the fun of this grind. From unraveling swfl.io’s mess to sizing up AI tools, there’s always a knot to untie. Next week, I’m diving into those Python ideas—maybe scraping some data or automating a workflow. If I’m lucky, I’ll snag a hardware upgrade too. For now, swfl.io’s back online, and I’ve still got most of my hair. Call it a win.

  • 2025 – Week 4: AI Weekly Update & Weekly Progress


    Reading Time: 3 minutes

    AI Weekly Update: This week was full of exciting progress on multiple fronts—from experimenting with brand-new AI tools to building out fresh websites and further refining my development processes. Below is a detailed rundown of everything that’s been happening, along with the challenges I faced and the lessons I learned.


    A.I. Updates

    DeepSeek R1 (70B Model)

    One of the most thrilling developments in this AI Weekly Update is my successful setup of DeepSeek R1 locally. By using two RTX 3090 Tis and one RTX 2080 Ti, I managed to run the 70B model smoothly. It’s been remarkably impressive in terms of its reasoning capabilities and response times, and I’m eager to test it against other AI solutions in real-world scenarios.

    I’ve been sharing my findings on X (formerly Twitter) and engaging with the AI community for feedback. Through these discussions, I’ve gained insight into how others are deploying large language models and tackling common challenges like memory optimization and prompt engineering.


    Tool Exploration

    This week, I also explored a variety of AI tools and plugins, each with its unique advantages:

    • Cursor: A coding companion that helps streamline certain tasks, though I’m still evaluating its best use cases.
    • Trae.ai: Offers an intuitive environment for quick AI-powered code generation and debugging.
    • VS Code Plugins (Continue, Cline): These extensions are invaluable for real-time code suggestions and AI-driven refactoring.
    • ChatGPT: I canceled my $20 account, as I’m pivoting to more customizable solutions. However, I recognize ChatGPT’s utility in rapid prototyping for various tasks.

    API Setup

    Thanks to the expanded array of providers available, I’ve configured API keys for Anthropic, OpenAI, DeepSeek, and Google AI Studio. Having these ready gives me the flexibility to switch between different AI backends based on the project requirements. Whether I need faster generation, deeper context understanding, or specialized features, I can pick and choose the best engine for each task.
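That pick-the-best-engine idea can be captured in a small dispatch table. The task names and environment-variable convention below are my own assumptions, not part of any SDK:

```python
import os

# Hypothetical mapping from task profile to provider; names are illustrative.
PROVIDERS = {
    "fast_generation": {"provider": "openai", "key_env": "OPENAI_API_KEY"},
    "long_context":    {"provider": "anthropic", "key_env": "ANTHROPIC_API_KEY"},
    "local":           {"provider": "ollama", "key_env": None},  # no key needed
}

def backend_for(task):
    """Return (provider_name, api_key_or_None) for the given kind of task."""
    cfg = PROVIDERS[task]
    key = os.environ.get(cfg["key_env"]) if cfg["key_env"] else None
    return cfg["provider"], key
```

Centralizing the choice in one table means a project can switch backends by editing a single dict instead of hunting through call sites.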

    Go-To Stack

    Currently, my favorite combination is VS Code + Cline with Anthropic/Claude 3.5. This setup offers a robust environment for brainstorming, coding, and debugging, allowing me to iterate on features quickly. I’m consistently impressed by Claude’s ability to handle extended context, which is particularly useful for more complex development tasks.


    Website Projects

    1. naplestea.com

    One of the more exciting ventures in this AI Weekly Update is my continued progress on naplestea.com, a new website where I plan to sell tea. The tech stack includes:

    • Next.js (15.1.5)
    • Strapi (5.8.0)
    • MariaDB
    • Dockerized setup with Nginx Proxy Manager

    This marks my second major Next.js project of the month (the first being Benchie.io). After spending significant time squashing lint and build issues, I’m pleased to confirm I can now run npm run build without any errors.

    Next Steps for naplestea.com:

    1. Finalize the e-commerce flow and set up automated email sequences for customer engagement.
    2. Convert the remaining static pages into dynamic content pages using Strapi, enabling easy updates and scalability.

    2. colliercomputers.com

    I’ve also been active on colliercomputers.com, where I created a quick video showing DeepSeek R1 70B running with Open WebUI. This demonstration helps illustrate how local deployments of advanced AI models can be achieved on consumer-grade hardware.

    Upcoming Tasks:

    • Benchmark the Gateway 500 SE PC to understand its performance limits.
    • Restore the Dell Dimension 4700 with Windows XP, mainly for retro-compatibility testing and archival purposes.

    3. jorgeiglesias.com

    This week, I also focused on jorgeiglesias.com, though changes were minimal. My main plan is to enable cross-posting of blog content between my various websites, using tags or categories to determine relevance. This feature will improve both reach and SEO by sharing pertinent articles across multiple domains.
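The tag-based cross-posting plan boils down to an overlap test between a post's tags and each site's interests. A sketch, with made-up tag sets:

```python
# Hypothetical per-site tag interests -- illustrative, not my real config.
SITE_TAGS = {
    "colliercomputers.com": {"retro", "hardware", "ai"},
    "swfl.io": {"local-news", "traffic"},
}

def targets_for(post_tags):
    """Return the domains a post should be cross-posted to, based on tag overlap."""
    tags = set(post_tags)
    return sorted(site for site, wanted in SITE_TAGS.items() if tags & wanted)
```

A post tagged "retro" would syndicate to colliercomputers.com only, while an untagged or off-topic post stays put.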


    Looking Ahead

    AI Agents

    A key goal for next week in this AI Weekly Update is to develop custom AI agents, primarily using Browser Use and n8n. I want to create an agent that can automatically generate tasks, helping me streamline workflows across my projects. This could be a game-changer for productivity, especially if it integrates smoothly with my existing Dockerized setups.

    E-Commerce & Email Flows

    I’m also determined to wrap up the core shopping cart functionality for naplestea.com, ensuring a polished and user-friendly e-commerce experience. Automated email sequences—welcome messages, order confirmations, and re-engagement campaigns—are top of my list.

    Systems & Benchmarks

    Lastly, I plan to establish baseline benchmarks on older PCs, documenting their performance under various operating systems. This includes measuring how well these machines can handle modern software tools, which can be particularly useful for retrofitting or specialized tasks.


    Final Thoughts

    This concludes the AI Weekly Update for Week 4 of 2025. It’s been a whirlwind of AI model experimentation, Docker troubleshooting, and Next.js refinements. Despite occasional hurdles—like persistent lint errors or configuration mishaps—I’m proud of how much progress I’ve made.

    If you’ve experimented with DeepSeek or created your own AI agents, I’d love to hear about your experiences. Sharing insights often leads to creative solutions and new collaborations!

    Thank you for tuning in, and I look forward to providing another deep dive next week. Whether you have questions, comments, or just want to share ideas, feel free to drop a line. Let’s keep pushing the boundaries of what’s possible in AI and web development.

  • Testing DeepSeek R1 70B: A Hands-On Demonstration


    Reading Time: 2 minutes

    In my latest video, I dive into the world of AI by testing the DeepSeek R1 70B model using Open WebUI. But before jumping into the AI action, I take a moment to showcase my hardware setup—because no AI experiment is complete without some serious computing power!

    Here’s what you’ll see in the video:


    The Hardware Setup

    • My trusty NVIDIA GPUs: RTX 2080 Ti, RTX 3090, and the beastly RTX 3090 Ti.
    • Monitoring tools like NVTop (because keeping an eye on your system’s performance is crucial when running AI workloads).

    The AI Demonstration

    After setting up, I put DeepSeek R1 70B to the test with some fun and challenging questions:

    1. Math Time: What is 10×10?
    2. Sports Fans Rejoice: Name the top 10 NBA players of all time and rank them by scoring.
    3. History Mystery: In Greek mythology, who was Jason’s maternal great-grandfather?

    The video also highlights an important lesson: AI isn’t always perfect. I compare DeepSeek’s answers to Google’s results and point out some discrepancies. This serves as a reminder that while AI is incredibly powerful, it’s not infallible—always double-check its responses!


    A Special Note

    This is my very first video upload on the CollierComputers channel, so I’m excited to finally share this content with you all! There’s plenty more to come as I dive deeper into AI experiments and hardware setups.


    Key Takeaways from the Video

    • AI vs. Reality: A comparison of DeepSeek R1 70B’s answers versus Google’s results.
    • Hardware Setup Tips: How to monitor and optimize your system for AI workloads using tools like NVTop (Grafana wasn’t cooperating during this recording, but it’s on my list to fix!).
    • Practical Testing: A hands-on look at how modern AI models perform in real-world scenarios.

    Why You Should Watch This Video

    If you’re curious about:

    • How AI models like DeepSeek R1 70B work in practice.
    • The importance of verifying AI responses.
    • A behind-the-scenes look at running AI experiments with high-end hardware.

    Then this video is for you! It’s a fun and informative watch, perfect for AI enthusiasts and tech fans alike.


    Catch the full video here:

    Like, subscribe to CollierComputers, and let me know in the comments what you think about DeepSeek R1 70B—or share your own AI experiments! 🚀

  • 2025 – Week 3: Weekly Progress Update

    2025 – Week 3: Weekly Progress Update

    Reading Time: < 1 minute

    It’s been an exciting week, though much of my time is consumed by my day job—as it should be. Despite the busy schedule, I manage to carve out moments after hours, often while winding down in bed, to make progress on the various projects I have going. If you’ve been following along, you know I juggle quite a few projects, but I always try to prioritize the ones I believe matter most.

    This past week, I focused on cleaning up and improving SWFL.io, my local news site. One of the highlights was adding date filter logic, which makes navigating the site even more user-friendly. I’m also seriously considering migrating the site to a Next.js framework—it feels like a straightforward enough transition and would make things even more streamlined.
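    The date filter itself boils down to comparing each article’s timestamp against a selected range. Here’s a minimal sketch of that kind of logic (the `Article` shape and function name are illustrative, not SWFL.io’s actual code):

```typescript
// Hypothetical article shape for a news site; field names are assumptions.
interface Article {
  title: string;
  publishedAt: string; // ISO date string, e.g. "2025-01-10"
}

// Return only the articles published within [from, to], inclusive.
function filterByDateRange(articles: Article[], from: string, to: string): Article[] {
  const start = new Date(from).getTime();
  const end = new Date(to).getTime();
  return articles.filter((a) => {
    const t = new Date(a.publishedAt).getTime();
    return t >= start && t <= end;
  });
}
```

On the front end, the `from`/`to` values would come straight from a pair of date inputs, so the filter re-runs whenever the user changes either bound.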

    On the Benchie.io front, progress was satisfying. This app, which helps me track and organize computer benchmark results, now has improved logic for dynamically adding forms and displaying results based on those inputs. I also expanded support for a few more benchmark tools, which should make the app even more versatile.
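    The add/remove logic for those dynamic forms can be modeled as a small reducer over a list of rows: state in, action in, new state out. A sketch under assumed names (this is not Benchie’s actual code):

```typescript
// One row per benchmark entry; names and shapes are illustrative assumptions.
interface BenchmarkRow {
  id: number;
  tool: string; // e.g. "CPU-Z", "HWiNFO32"
  score: number | null;
}

type RowAction =
  | { type: "add"; tool: string }
  | { type: "remove"; id: number }
  | { type: "setScore"; id: number; score: number };

let nextId = 1; // simple id source for new rows

// Pure reducer: returns a new row list without mutating the old one,
// which plugs straight into React's useReducer on the form component.
function rowsReducer(rows: BenchmarkRow[], action: RowAction): BenchmarkRow[] {
  switch (action.type) {
    case "add":
      return [...rows, { id: nextId++, tool: action.tool, score: null }];
    case "remove":
      return rows.filter((r) => r.id !== action.id);
    case "setScore":
      return rows.map((r) => (r.id === action.id ? { ...r, score: action.score } : r));
  }
}
```

Keeping the logic pure like this also makes the “display results based on those inputs” half trivial: the rendered results are just a projection of the current row list.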

    Looking ahead, I’m planning to spend more time benchmarking and, hopefully, indulging in some old-school gaming. Oh, and I did pick up a few video cards that I’m really excited to check out—though realistically, I may not get around to those for a couple of weeks.

    Until next time, stay curious and keep building.

  • 2025 – Week 2: Weekly Progress Update

    2025 – Week 2: Weekly Progress Update

    Reading Time: 2 minutes

    Week 2 of 2025 got a bit stretched out due to the year starting on a Thursday—Week 1 ended early, and Week 2 absorbed a few extra days. In any case, here’s a look at everything I’ve been working on across my various projects:

    Day Job + Daily Updates

    I decided to begin sending brief daily recaps to my development team at work. While it takes a little extra time each evening, these updates are already helping everyone stay on the same page about progress, roadblocks, and what’s coming next.

    Franck Landscaping (Client Project)

    I spent a fair amount of time finalizing tasks for Franck Landscaping. The primary goals were to boost SEO, tidy up the navigation, and strengthen security measures. I also revamped the site logo for a cleaner look.

    This project had been on hold for a bit, but we recently got sign-off on payment and design approvals, so I wrapped up the remaining tasks and am gearing up to launch. Conveniently, this client also helps me with irrigation and lawn care, so it’s a win-win partnership.

    Moving forward, I’ll integrate Google Business listings and set up analytics to provide valuable data insights—helping Franck Landscaping reach more customers and track performance effectively.

    CollierComputers.com

    On the CollierComputers.com front, I experimented with recording benchmark sessions using a budget-friendly capture card—only to end up with choppy, laggy video. I’m looking into better hardware (like the Elgato HD60 Pro) to achieve smoother captures. For now, I’m logging benchmark data the old-school way, but definitely need a more efficient solution.

    Benchie

    Benchie is another side project I’ve been working on. It’s a separate application where users can input details about a computer (CPU, RAM, GPU, etc.) and record benchmark results from tools like HWiNFO32 or CPU-Z. I’ve built it using Next.js (for the front end) and Strapi (for the backend). The prototype is live and functional, though I’m planning to add more refinements and features soon.
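    For context on how the two halves talk to each other: Strapi’s REST API expects create/update request bodies wrapped in a `data` key. A minimal sketch, with a collection name and fields that are my assumptions rather than Benchie’s actual schema:

```typescript
// Hypothetical benchmark-result shape; field names are illustrative.
interface BenchmarkResult {
  computer: string;
  tool: string;   // e.g. "CPU-Z", "HWiNFO32"
  metric: string; // e.g. "single-thread"
  value: number;
}

// Strapi's REST API wraps create/update bodies in a `data` key.
function buildStrapiPayload(result: BenchmarkResult) {
  return { data: result };
}

// Usage from the Next.js front end (assumes a running Strapi instance,
// a "benchmarks" collection, and a valid API token):
// await fetch("http://localhost:1337/api/benchmarks", {
//   method: "POST",
//   headers: { "Content-Type": "application/json", Authorization: `Bearer ${token}` },
//   body: JSON.stringify(buildStrapiPayload(result)),
// });
```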

    SWFL.io

    I spent time improving SWFL.io’s mobile experience by fixing minor layout quirks. I also broke down my web scraping scripts into individually containerized Docker images—easier to maintain, especially using Portainer. Now, each scraper can be started, stopped, or updated independently, without affecting the others.

    Final Thoughts

    All in all, Week 2 (and its extended schedule) turned out productive. I tackled multiple client updates, refined personal projects, containerized my scraping tools, and started daily dev-team updates for my main job. I still need better capture hardware for recording benchmarks, but that’s on my to-do list for Week 3. Stay tuned for more updates soon!

  • 2025 – Week 1: Weekly Progress Update

    2025 – Week 1: Weekly Progress Update

    Reading Time: 2 minutes

    What an End to the Year!
    As we kick off the year, I’ve been making significant progress on some personal projects, which I’ll share below:

    jorgeiglesias.com

    The latest update for this site is the launch of a fresh UI. While there’s still a bit more work to do, the home page and blog are up and running. My next focus will be ensuring all my social links are properly integrated.

    One of the more challenging moments came when I was building a Next.js and Strapi v5 website. I hit a major roadblock with ESLint build issues, which proved difficult to resolve. Instead of staying stuck, I decided to pivot and spun up this WordPress site in just a few hours.

    Refreshing this site gave me a chance to enhance my WordPress skill set, and once that was done, I switched back to working on colliercomputers.com.

    colliercomputers.com

    I was excited to dive back into colliercomputers.com, another WordPress-powered site. While I haven’t been focusing on the website itself, I’ve shifted my efforts to documenting the computers I’m working with and organizing benchmark data.

    Initially, I thought a spreadsheet would be sufficient for tracking benchmark tests. However, as I started creating a template, it became clear that a spreadsheet wouldn’t scale well for what I had in mind. Over the Christmas and New Year’s break, I decided to take a different approach and built a quick application, Benchie.io, to streamline the process of recording and managing benchmark results.

    benchie.io

    Benchie.io is a user-friendly app I designed for enthusiasts, collectors, and tech-savvy individuals like myself 😉. It helps users document and track benchmark results effortlessly. Currently, users can log in, create entries for their computers, and add benchmark details from popular tools like CPU-Z and HWiNFO32—with plans to support even more tools in the future.

    Whether you’re testing the limits of cutting-edge systems or exploring the performance of vintage machines running Windows 95, 98, or XP, Benchie.io provides a centralized hub to store and organize your benchmark data.

    This personal project leverages the skills I honed while working with Next.js and Strapi 5, drawing on the lessons learned from developing my jorgeiglesias.com site (which I haven’t fully launched—yet).

    I’m excited to continue expanding Benchie.io by adding new features and enhancing its functionality in the future!

    That’s it for this week’s updates! With the Christmas and New Year holidays, I had some extra time off from work, which allowed me to dive into the Benchie.io project. Now that I’m back to my full-time job, I expect to spend a bit less time on personal projects, but I’m excited to continue making steady progress in the weeks ahead.

  • 2024 – Week 51: Weekly Progress Update

    2024 – Week 51: Weekly Progress Update

    Reading Time: 2 minutes

    Weekly Update: A New Direction

    A lot changed before the end of the week; read on to learn about the latest updates.

    What Was Accomplished

    This week, I spent significant time addressing long-standing issues on my website, JorgeIglesias.com. The site was initially created earlier this year, and while it served its purpose, I hadn’t revisited it since getting it up and running. As I worked through the updates, it became clear that the original implementation, which relied solely on React, wasn’t ideal for a multi-page site. This realization, coupled with the opportunity to upgrade to Strapi 5, led me to a new decision: to rebuild the site from scratch as JorgeIglesias.com V2.

    Here’s a quick rundown of what was resolved:

    • Fixed Matomo integration for better analytics tracking.
    • Refactored API calls for the Header and Hero sections to improve performance and maintainability.
    • Addressed blog-related issues:
      • Fixed the blog post page layout.
      • Resolved sorting issues for blog posts.
      • Adjusted the BlogCard size and added attributes for better presentation.
    • Fixed a 404 error on footer links and resolved a link issue within the footer.
    • Improved blog heading placement for a cleaner design.
    • Refactored the footer structure for consistency.
    • Created a new `README.md` file to document updates for V2.
    • Set up Strapi configurations, including S3 storage, and added a social component for authors.
    • Improved the deployment process for smoother updates and scalability.

    What’s In Progress

    As I worked through these fixes, I began laying the groundwork for the next iteration of the site. Currently, the following tasks are in progress:

    • Migrating to Next.js and Strapi V5: The migration to Next.js allows for server-side rendering and better performance for multi-page applications. Pairing this with Strapi V5 will provide more flexibility for managing content.
    • Configuring CORS for the API: This is critical for ensuring secure and seamless communication between the frontend and backend.
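    In Strapi, CORS is handled by the built-in `strapi::cors` middleware, configured in `config/middlewares.ts`. A minimal sketch of the kind of setup described above (the origins, methods, and headers listed here are placeholders, not the site’s actual configuration):

```typescript
// config/middlewares.ts — Strapi's middleware stack, with the CORS
// entry expanded to restrict which front ends may call the API.
export default [
  "strapi::logger",
  "strapi::errors",
  "strapi::security",
  {
    name: "strapi::cors",
    config: {
      // Placeholder origins: the deployed front end plus local Next.js dev.
      origin: ["https://jorgeiglesias.com", "http://localhost:3000"],
      methods: ["GET", "POST", "PUT", "DELETE", "OPTIONS"],
      headers: ["Content-Type", "Authorization"],
    },
  },
  "strapi::poweredBy",
  "strapi::query",
  "strapi::body",
  "strapi::session",
  "strapi::favicon",
  "strapi::public",
];
```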

    What’s Next

    Moving forward, my focus will shift to implementing rate limiting for API requests. This is an essential feature to protect the site from abuse and maintain reliability as I continue to build out V2.
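    As a starting point, rate limiting can be as simple as a fixed-window counter per client. A minimal sketch (the limits are illustrative; in practice a Koa middleware such as `koa-ratelimit` is a natural fit, since Strapi runs on Koa):

```typescript
// Fixed-window rate limiter: each client gets `limit` requests per
// `windowMs` milliseconds; extra requests in the window are rejected.
class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed, false if the client is throttled.
  allow(clientId: string, now: number = Date.now()): boolean {
    const entry = this.hits.get(clientId);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      // New client, or the previous window expired: start a fresh window.
      this.hits.set(clientId, { count: 1, windowStart: now });
      return true;
    }
    if (entry.count < this.limit) {
      entry.count++;
      return true;
    }
    return false;
  }
}
```

Keyed on IP address (or API token), this is enough to blunt abusive bursts while V2 is still small; a shared store like Redis only becomes necessary once the API runs on more than one instance.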