<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Articles by Emin Muhammadi]]></title><description><![CDATA[Explore technology, software development, and software testing with expert insights and practical advice from author Emin Muhammadi.]]></description><link>https://articles.eminmuhammadi.com</link><generator>RSS for Node</generator><lastBuildDate>Mon, 20 Apr 2026 05:55:24 GMT</lastBuildDate><atom:link href="https://articles.eminmuhammadi.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How to Test AI-Written Software Products: Step-by-Step Methods, Real Code Examples, and the Hidden Drawbacks]]></title><description><![CDATA[Testing AI-written software products works best when you treat the generated code as “helpful but untrusted,” then build a repeatable test pipeline that proves correctness, safety, and stability over time. The practical goal is not to confirm that th...]]></description><link>https://articles.eminmuhammadi.com/how-to-test-ai-written-software-products-step-by-step-methods-real-code-examples-and-the-hidden-drawbacks</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/how-to-test-ai-written-software-products-step-by-step-methods-real-code-examples-and-the-hidden-drawbacks</guid><category><![CDATA[Testing]]></category><category><![CDATA[QA]]></category><category><![CDATA[AI]]></category><category><![CDATA[Python]]></category><category><![CDATA[OWASP TOP 10]]></category><category><![CDATA[NIST]]></category><category><![CDATA[llm]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Sun, 15 Feb 2026 12:26:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1771158146769/8d69641f-7883-4e38-bd12-1a68d2f608fc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Testing AI-written software products works best when you treat the generated code as “helpful but untrusted,” then build a repeatable test pipeline that proves correctness, safety, and stability over time. The practical goal is not to confirm that the code runs once, but to keep it correct as requirements change, dependencies update, and real users behave in unexpected ways.</p>
<h2 id="heading-why-ai-written-code-needs-extra-testing">Why AI-written code needs extra testing</h2>
<p>AI-generated code often appears confident and neat, but it can still overlook important details needed for reliable software in production, such as input validation, error handling, edge cases, performance limits, and secure defaults. This is why it's useful to think about managing risks over time instead of just doing one-time QA. <a target="_blank" href="https://www.nist.gov/itl/ai-risk-management-framework">The NIST AI Risk Management Framework</a> suggests handling AI systems with ongoing oversight, understanding the context, measuring risks, and managing them continuously, rather than assuming you can "test it once and be done."</p>
<p>Another reason is that AI code generation can amplify a classic weakness in software teams: relying on “I read it and it seems fine.” Humans are surprisingly bad at spotting certain categories of bugs in plausible code, especially off-by-one boundaries, rounding rules, and failure paths that only appear under load or odd inputs. A strong test strategy makes those failures obvious and repeatable.</p>
<h2 id="heading-a-step-by-step-approach-that-actually-holds-up">A step-by-step approach that actually holds up</h2>
<p>First, write a small, concrete spec before you write tests. By “spec,” I do not mean a 20-page document; I mean a few sentences that define inputs, outputs, and the rules that must always be true. For example: “Totals are rounded to two decimal places,” “Discount is applied before tax,” “Negative quantities are rejected,” and “Empty carts return 0.00.” If you can’t write these rules down, the AI will guess, and your tests will accidentally encode the guess instead of the business requirement.</p>
<p>Next, do a quick threat and failure model, even for simple modules. Ask what can go wrong if an attacker or a chaotic environment interacts with this code: can it leak secrets to logs, accept malicious input, hang on huge payloads, or run dangerous shell commands. If your “AI-written product” includes an LLM feature, it’s especially useful to think in terms of known LLM app risks such as prompt injection, sensitive information disclosure, insecure output handling, and supply chain weaknesses, all of which are highlighted in the <a target="_blank" href="https://owasp.org/www-project-top-ten/">OWASP Top 10</a> for LLM Applications.</p>
<p>Then add static checks before runtime tests. AI-written code frequently imports things that don’t exist, uses the wrong method name, or returns the wrong type while still appearing reasonable. Linters and type checkers turn those into immediate, cheap failures, and they also keep future human edits from quietly degrading quality.</p>
<p>After that, write unit tests with two kinds of coverage: example-based checks and boundary checks. Example-based tests confirm the typical “happy path.” Boundary tests confirm that the code behaves correctly at the edges: empty lists, one item, very large numbers, negative values, weird Unicode, missing values, and invalid formats. This is where AI-written code most often breaks, because generation tends to optimize for the common case you described in your prompt rather than the messy cases real users create.</p>
<p>Once you have some unit tests, add property-based tests for invariants. Property-based tests don’t just check one or two examples; they generate many inputs for you and try to break your assumptions. In Python, Hypothesis is a well-known library for this style of testing, and it explicitly aims to find edge cases you did not think of and then “shrink” failing inputs down to the simplest example that still fails, which makes debugging dramatically faster.</p>
<p>After unit and property tests, add fuzzing for parsers, validators, and anything that processes untrusted input. Fuzzing is especially valuable for AI-generated code because it’s common to see optimistic parsing and incomplete error handling. If you maintain open-source infrastructure or want a model of what “continuous fuzzing” looks like, Google’s <a target="_blank" href="https://google.github.io/oss-fuzz/">OSS-Fuzz</a> describes itself as continuous fuzzing for open-source projects and supports multiple fuzzing engines and sanitizers, which gives you a sense of how serious teams operationalize fuzzing rather than treating it as a one-off activity.</p>
<p>Then, add integration tests that prove your code behaves correctly with real dependencies. Many AI-generated modules look correct in isolation but fail when they meet actual databases, real HTTP timeouts, real character encodings, or real cloud permissions. Integration tests also catch problems that mock-heavy unit tests can miss, such as incorrect SQL assumptions or wrong retry behavior.</p>
<p>Finally, turn every discovered bug into a regression test. This sounds obvious, but it is the difference between a test suite that grows smarter and one that remains a static checklist. When AI-written code fails in production, your best defense is to make sure that specific failure can never quietly return.</p>
<h2 id="heading-worked-example-a-tiny-invoice-total-module">Worked example: a tiny “invoice total” module</h2>
<p>Imagine an AI assistant generated a small pricing function for your product. It passes a quick manual check, so it ships. A month later you get support tickets: “Sometimes totals are wrong by 0.01,” “We got negative totals,” and “Discounts over 100% weren’t blocked.”</p>
<p>Here is a simplified version of that kind of AI-generated code:</p>
<pre><code class="lang-python"><span class="hljs-comment"># invoice.py</span>
<span class="hljs-keyword">from</span> dataclasses <span class="hljs-keyword">import</span> dataclass
<span class="hljs-keyword">from</span> typing <span class="hljs-keyword">import</span> Iterable

<span class="hljs-meta">@dataclass(frozen=True)</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">LineItem</span>:</span>
    sku: str
    unit_price: float
    qty: int

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">total_amount</span>(<span class="hljs-params">items: Iterable[LineItem], tax_rate: float, discount_pct: float</span>) -&gt; float:</span>
    <span class="hljs-string">"""
    Returns total amount including tax after discount.
    """</span>
    subtotal = sum(i.unit_price * i.qty <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> items)
    discounted = subtotal * (<span class="hljs-number">1.0</span> - discount_pct / <span class="hljs-number">100.0</span>)
    total = discounted * (<span class="hljs-number">1.0</span> + tax_rate)
    <span class="hljs-keyword">return</span> round(total, <span class="hljs-number">2</span>)
</code></pre>
<p>What’s wrong with it is not dramatic, which is exactly why it’s dangerous. It uses floats for money, it does not validate anything, and it silently allows negative quantities or discount percentages that create nonsense totals. It also rounds at the end, which may or may not match your accounting rules, and it gives you no structured error when inputs are invalid.</p>
<p>Now let’s test it with pytest. Pytest is popular partly because its fixture system lets you create reusable, named setup logic that can be shared across tests and scopes, which helps keep tests readable as the suite grows.</p>
<pre><code class="lang-python"><span class="hljs-comment"># test_invoice_unit.py</span>
<span class="hljs-keyword">import</span> pytest
<span class="hljs-keyword">from</span> invoice <span class="hljs-keyword">import</span> LineItem, total_amount

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_total_happy_path</span>():</span>
    items = [LineItem(<span class="hljs-string">"A"</span>, <span class="hljs-number">10.00</span>, <span class="hljs-number">2</span>), LineItem(<span class="hljs-string">"B"</span>, <span class="hljs-number">5.00</span>, <span class="hljs-number">1</span>)]
    <span class="hljs-keyword">assert</span> total_amount(items, tax_rate=<span class="hljs-number">0.2</span>, discount_pct=<span class="hljs-number">10</span>) == <span class="hljs-number">27.00</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_empty_items_is_zero</span>():</span>
    <span class="hljs-keyword">assert</span> total_amount([], tax_rate=<span class="hljs-number">0.2</span>, discount_pct=<span class="hljs-number">10</span>) == <span class="hljs-number">0.00</span>

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_negative_quantity_is_rejected</span>():</span>
    items = [LineItem(<span class="hljs-string">"A"</span>, <span class="hljs-number">10.00</span>, <span class="hljs-number">-1</span>)]
    <span class="hljs-keyword">with</span> pytest.raises(ValueError):
        total_amount(items, tax_rate=<span class="hljs-number">0.2</span>, discount_pct=<span class="hljs-number">0</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_discount_over_100_is_rejected</span>():</span>
    items = [LineItem(<span class="hljs-string">"A"</span>, <span class="hljs-number">10.00</span>, <span class="hljs-number">1</span>)]
    <span class="hljs-keyword">with</span> pytest.raises(ValueError):
        total_amount(items, tax_rate=<span class="hljs-number">0.2</span>, discount_pct=<span class="hljs-number">150</span>)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_negative_tax_rate_is_rejected</span>():</span>
    items = [LineItem(<span class="hljs-string">"A"</span>, <span class="hljs-number">10.00</span>, <span class="hljs-number">1</span>)]
    <span class="hljs-keyword">with</span> pytest.raises(ValueError):
        total_amount(items, tax_rate=<span class="hljs-number">-0.1</span>, discount_pct=<span class="hljs-number">0</span>)
</code></pre>
<p>If you run these tests against the original module, several will fail because the function never raises errors. That failure is good news: the tests are forcing you to decide what “correct” means.</p>
<p>Next, add a property-based test that checks invariants. For pricing, a simple invariant is that if all quantities and prices are non-negative, tax rate is non-negative, and discount is between 0 and 100, then the total should never be negative, and it should be rounded to two decimal places.</p>
<pre><code class="lang-python"><span class="hljs-comment"># test_invoice_properties.py</span>
<span class="hljs-keyword">from</span> decimal <span class="hljs-keyword">import</span> Decimal
<span class="hljs-keyword">from</span> hypothesis <span class="hljs-keyword">import</span> given, strategies <span class="hljs-keyword">as</span> st
<span class="hljs-keyword">from</span> invoice <span class="hljs-keyword">import</span> LineItem, total_amount

<span class="hljs-meta">@given(</span>
    prices=st.lists(st.decimals(min_value=<span class="hljs-number">0</span>, max_value=<span class="hljs-number">1000</span>, places=<span class="hljs-number">2</span>), min_size=<span class="hljs-number">0</span>, max_size=<span class="hljs-number">20</span>),
    qtys=st.lists(st.integers(min_value=<span class="hljs-number">0</span>, max_value=<span class="hljs-number">50</span>), min_size=<span class="hljs-number">0</span>, max_size=<span class="hljs-number">20</span>),
    tax=st.decimals(min_value=<span class="hljs-number">0</span>, max_value=<span class="hljs-number">1</span>, places=<span class="hljs-number">3</span>),
    disc=st.decimals(min_value=<span class="hljs-number">0</span>, max_value=<span class="hljs-number">100</span>, places=<span class="hljs-number">2</span>),
)
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_total_never_negative_and_two_decimals</span>(<span class="hljs-params">prices, qtys, tax, disc</span>):</span>
    n = min(len(prices), len(qtys))
    items = [LineItem(<span class="hljs-string">f"SKU<span class="hljs-subst">{i}</span>"</span>, float(prices[i]), int(qtys[i])) <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> range(n)]
    total = total_amount(items, tax_rate=float(tax), discount_pct=float(disc))
    <span class="hljs-keyword">assert</span> total &gt;= <span class="hljs-number">0</span>
    <span class="hljs-keyword">assert</span> round(total, <span class="hljs-number">2</span>) == total
</code></pre>
<p>This is exactly the kind of test that tends to uncover the “0.01 bug” and weird interactions you didn’t explicitly write. Hypothesis’s design is to generate many inputs and then reduce a failing case to a simpler one you can understand, which is especially helpful when AI-written logic fails in a way you didn’t anticipate.​</p>
<p>At this point, the right fix is to stop using floats for money and start validating inputs. Here is a more robust version using <code>Decimal</code> and explicit checks:</p>
<pre><code class="lang-python"><span class="hljs-comment"># invoice_fixed.py</span>
<span class="hljs-keyword">from</span> dataclasses <span class="hljs-keyword">import</span> dataclass
<span class="hljs-keyword">from</span> decimal <span class="hljs-keyword">import</span> Decimal, ROUND_HALF_UP
<span class="hljs-keyword">from</span> typing <span class="hljs-keyword">import</span> Iterable

TWOPLACES = Decimal(<span class="hljs-string">"0.01"</span>)

<span class="hljs-meta">@dataclass(frozen=True)</span>
<span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">LineItem</span>:</span>
    sku: str
    unit_price: Decimal
    qty: int

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">total_amount</span>(<span class="hljs-params">items: Iterable[LineItem], tax_rate: Decimal, discount_pct: Decimal</span>) -&gt; Decimal:</span>
    <span class="hljs-keyword">if</span> tax_rate &lt; <span class="hljs-number">0</span>:
        <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"tax_rate must be &gt;= 0"</span>)
    <span class="hljs-keyword">if</span> discount_pct &lt; <span class="hljs-number">0</span> <span class="hljs-keyword">or</span> discount_pct &gt; <span class="hljs-number">100</span>:
        <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"discount_pct must be between 0 and 100"</span>)

    subtotal = Decimal(<span class="hljs-string">"0"</span>)
    <span class="hljs-keyword">for</span> i <span class="hljs-keyword">in</span> items:
        <span class="hljs-keyword">if</span> i.qty &lt; <span class="hljs-number">0</span>:
            <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"qty must be &gt;= 0"</span>)
        <span class="hljs-keyword">if</span> i.unit_price &lt; <span class="hljs-number">0</span>:
            <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"unit_price must be &gt;= 0"</span>)
        subtotal += i.unit_price * Decimal(i.qty)

    discounted = subtotal * (Decimal(<span class="hljs-string">"1"</span>) - (discount_pct / Decimal(<span class="hljs-string">"100"</span>)))
    total = discounted * (Decimal(<span class="hljs-string">"1"</span>) + tax_rate)

    <span class="hljs-keyword">if</span> total &lt; <span class="hljs-number">0</span>:
        <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"total cannot be negative"</span>)

    <span class="hljs-keyword">return</span> total.quantize(TWOPLACES, rounding=ROUND_HALF_UP)
</code></pre>
<p>This version is less “cute” than the AI-generated one, but it communicates intent, fails loudly on invalid data, and avoids floating-point surprises. Your earlier tests can be adapted to assert <code>Decimal</code> values, and now they become a guardrail: if someone later “simplifies” the code back to floats, the suite will catch it.</p>
<h2 id="heading-testing-products-that-include-llm-features">Testing products that include LLM features</h2>
<p>If your AI-written product includes LLM calls, you need another layer of testing beyond normal software correctness: you must test behavior across prompts, jailbreak attempts, and changing model behavior. OWASP’s Top 10 for LLM Applications is a helpful vocabulary for what to test because it names concrete risk categories that show up in real systems, such as prompt injection and sensitive information disclosure.</p>
<p>In practice, that means you should write tests that simulate malicious or messy user input and verify the system still behaves safely. For example, you might test that the assistant refuses to reveal secrets from tool outputs, that it does not follow instructions embedded in retrieved documents, and that its outputs are constrained to safe schemas when they are later executed by downstream code.</p>
<p>You also want repeatable evaluation, not just ad-hoc prompting in a chat window. OpenAI’s open-source <code>evals</code> repository describes itself as a framework for evaluating LLMs and LLM systems and includes an open-source registry of benchmarks, which reflects the general idea: you should treat LLM behavior as something you continuously measure with a harness you can rerun.</p>
<h2 id="heading-full-drawbacks-and-trade-offs-you-should-expect">Full drawbacks and trade-offs you should expect</h2>
<p>The first drawback is that good testing takes time, and AI-generated code can create a false sense of speed. You can generate features quickly, but if you do not invest in tests, you often pay the time back later with interest: debugging production incidents, handling support, and untangling brittle logic.</p>
<p>The second drawback is brittleness, especially when your product includes LLM prompts. Prompt-based behavior can shift due to model updates, temperature settings, or small prompt edits, so you must design tests around stable expectations like schema compliance, refusal behavior, and invariant guarantees rather than expecting identical wording every run.</p>
<p>The third drawback is that a test suite can give you false confidence if it only covers the examples you already believe. AI-written code tends to fail in the spaces you didn’t imagine, so you need boundary tests, property-based tests, and fuzzing to explore the input space more aggressively; Hypothesis is explicitly oriented toward finding edge cases you would not have written by hand, which is why it’s so useful in this context.</p>
<p>The fourth drawback is security overhead. If the product touches user input, files, networks, or credentials, you need to budget time for security scanning, dependency review, and abuse-case testing, and if LLMs are involved you should explicitly test the OWASP-style risk categories rather than hoping normal unit tests will cover them.</p>
<p>The fifth drawback is operational: even strong tests won’t cover everything that happens under real traffic. This is why teams that take reliability seriously add monitoring, alerting, and controlled rollouts, and why continuous approaches like OSS-Fuzz exist in the broader ecosystem as a model of “keep testing as the code changes,” not “test once before release.”</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Testing AI-generated software requires treating the code as "helpful but untrusted" and establishing a continuous testing pipeline to ensure correctness, safety, and stability over time. AI-generated code often misses crucial details like input validation and error handling, necessitating ongoing oversight and a robust test strategy. Starting with a clear spec and incorporating threat modeling, static checks, unit tests, property-based tests, and fuzzing can safeguard against potential failures. Integration tests verify real-world compatibility, and regression tests prevent re-emerging issues. For code involving LLMs, additional testing for prompt security and behavior consistency is essential. While thorough testing demands time, it ultimately saves costs by preventing production issues and ensuring software reliability.</p>
]]></content:encoded></item><item><title><![CDATA[The Twelve-Factor App: Principles for Cloud-Native Services]]></title><description><![CDATA[The Twelve-Factor App is a methodology for building modern software-as-a-service applications that are easy to deploy, scale, and maintain. It was developed by engineers at Heroku and encapsulates twelve best practices covering code management, confi...]]></description><link>https://articles.eminmuhammadi.com/the-twelve-factor-app-principles-for-cloud-native-services</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/the-twelve-factor-app-principles-for-cloud-native-services</guid><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[AWS]]></category><category><![CDATA[google cloud]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Wed, 05 Nov 2025 20:28:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762374434315/cea655cc-c25f-400e-b521-861ee5f3230e.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Twelve-Factor App is a methodology for building modern software-as-a-service applications that are easy to deploy, scale, and maintain. It was developed by engineers at Heroku and encapsulates twelve best practices covering code management, configuration, deployment, and operations. These principles address common cloud-era challenges – from ensuring consistent environments to making apps portable and resilient.</p>
<h2 id="heading-codebase">Codebase</h2>
<p>At the heart of the first factor is the rule that an app should have exactly one codebase, tracked in version control, with many deploys. In other words, your entire application lives in a single Git repository. Each running instance of the app – whether a developer’s local copy, a staging server, or production – is a deploy of that same code. There should never be multiple, divergent codebases for one service. For example, a startup team might host their service in one GitHub repository and use branches or tags to deploy the same code to different environments. This ensures that what runs in production is traceable to the same source code under version control. This one-to-many relationship avoids the confusion of “works on my machine” by making every environment use the identical code (perhaps at different versions).</p>
<h2 id="heading-dependencies">Dependencies</h2>
<p>A Twelve-Factor app explicitly declares and isolates all its dependencies. It does not assume that any library or system package exists by default. Instead, the app uses a manifest (such as <code>requirements.txt</code> or <code>package.json</code>) to list all libraries it needs, and uses tooling (virtual environments, containers, bundlers) to isolate them. For example, a Node.js application will list modules in <code>package.json</code> and install them fresh in each environment using <code>npm ci</code>. Similarly, a Python team might use <code>pip install -r requirements.txt</code> inside a virtualenv or Docker image. Explicit dependency management means a new developer can clone the repo and run one command to get everything installed, and the same setup works in production. In practice, a company might use Docker to containerize its app: the Dockerfile serves as the dependency declaration and build context, so that each release includes only what it needs.</p>
<h2 id="heading-config">Config</h2>
<p>Configuration refers to all the settings that vary between deploys, such as database URLs, API keys, and other credentials. The Twelve-Factor principle is to keep config out of the code and store it in the environment. In practice, this means using environment variables (or other external config stores) rather than hard-coding any environment-specific values. For example, a team might set <code>DATABASE_URL</code> and <code>REDIS_URL</code> in each environment: local development, staging, and production each have their own values. This way the same codebase can be open-sourced without leaking secrets, and the app can easily connect to different resources by changing env vars.</p>
<h2 id="heading-backing-services">Backing Services</h2>
<p>Backing services are any networked services the app uses, such as databases, message queues, or caching systems. The Twelve-Factor rule is to treat backing services as attached resources. Whether a database is local or third-party should not matter to the app: it simply connects to whatever URL is given. For example, an application might be developed using a local MySQL database during development, but in production switch to Amazon RDS by changing the <code>DATABASE_URL</code> in the environment. No code changes are needed – only the resource handle in the configuration changes. This loose coupling means a team can replace or upgrade backing services on the fly.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762374066037/5f400f45-7b2c-4738-af17-1d3cf4d87bb8.png" alt="Diagram showing a &quot;Cloud Native System&quot; in the center, connected by dashed arrows to various &quot;Backing Services&quot;: Monitoring, Security Services, Analytics, Distributed Caching, Message Brokers, Relational Databases, Document Databases, Streaming Services, and Storage Services." class="image--center mx-auto" /></p>
<h2 id="heading-build-release-run">Build, Release, Run</h2>
<p>Deployment is split into three distinct stages: build, release, and run. The build stage converts the codebase into an executable bundle (for example, building a Docker image or compiling assets). The release stage takes that build and combines it with the configuration for a given deploy, producing a specific release. Finally, the run stage actually launches the app using that release. For example, a CI/CD pipeline might trigger a build job that produces a Docker image, inject configuration variables, and deploy containers on a Kubernetes cluster. The key is that these stages never intermingle: once in the run stage, the app should be immutable and cannot mutate its own build. This separation makes rollbacks and debugging straightforward.</p>
<h2 id="heading-processes">Processes</h2>
<p>A Twelve-Factor app is executed as one or more stateless processes. Each process handles a slice of work, such as one process type handling web requests and another handling background jobs. Critically, processes are share-nothing and do not rely on local memory or filesystem state. Any data that needs to persist must go into a backing service (like a database or cache). For example, a web service might store session data in Redis instead of server memory so that multiple instances can serve user requests interchangeably. This statelessness enables horizontal scaling and resilience.</p>
<h2 id="heading-port-binding">Port Binding</h2>
<p>The app is self-contained and exports services by binding to a port. Instead of relying on an external web server or application container, the app includes its own HTTP server and listens on a specified port. For example, a Node.js Express app might call <code>app.listen(process.env.PORT)</code> to accept HTTP requests directly, rather than running inside Apache or Tomcat. This makes each app an independent service that can be composed into larger systems via simple URLs or ports.</p>
<h2 id="heading-concurrency">Concurrency</h2>
<p>Twelve-Factor apps scale by decomposing into process types and running more of each as needed. Each distinct kind of work is handled by a “process type” – for example, <code>web</code> for HTTP requests and <code>worker</code> for background jobs. The app can run multiple instances of each process type. For example, if traffic grows, a team might start three web processes and two worker processes across several containers. This horizontal scaling of identical, share-nothing processes makes scaling straightforward.</p>
<h2 id="heading-disposability">Disposability</h2>
<p>Processes should be robust and disposable, with fast startup and graceful shutdown. In practice, this means designing your app so it can be started or stopped quickly in response to scaling or deployment events. Ideally, a process should reach readiness within seconds, and on shutdown, it should finish in-flight requests or jobs before exiting cleanly. This design enables autoscaling, rolling deployments, and resilient systems that can recover gracefully from failures.</p>
<h2 id="heading-devprod-parity">Dev/Prod Parity</h2>
<p>Keeping development, staging, and production as similar as possible is crucial for continuous delivery. The goal is to minimize the gaps in time, personnel, and tools. For example, a team might use Docker so developers run the same OS, database version, and language runtime locally as in production. They avoid using different stacks across environments because even small differences can cause production-only bugs. By maintaining parity, developers can ship changes more frequently and with fewer surprises.</p>
<h2 id="heading-logs">Logs</h2>
<p>Treat logs as event streams, not as files to manage. Twelve-Factor apps do not write logs to disk or attempt to rotate them; instead, each process writes its output (stdout/stderr) to the console. In production, the execution environment captures and aggregates all streams from all processes and routes them to log management systems such as Elasticsearch, Splunk, or cloud log services. This approach keeps applications simple and observability centralized.</p>
<h2 id="heading-admin-processes">Admin Processes</h2>
<p>Any one-off administrative tasks, such as database migrations or scripts, should be executed in an identical environment to the app’s regular processes. This ensures consistency and avoids “works on my machine” issues. For example, in Kubernetes, an operator might run a database migration using the same container image used by the application itself. By treating admin processes as first-class citizens, maintenance and ad-hoc operations happen under the same rules as normal runs, keeping the system predictable.</p>
<p><strong>Source:</strong> <a target="_blank" href="https://12factor.net/">https://12factor.net/</a></p>
]]></content:encoded></item><item><title><![CDATA[QA Engineer Roadmap]]></title><description><![CDATA[A QA Engineer Roadmap is the guide many aspiring testers wish they had from day one. Whether you’re new to software testing or aiming to level up toward automation, this roadmap will show you the path. In this article, you'll get a clear, structured ...]]></description><link>https://articles.eminmuhammadi.com/qa-engineer-roadmap</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/qa-engineer-roadmap</guid><category><![CDATA[QA engineer]]></category><category><![CDATA[Roadmap]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Manual Testing]]></category><category><![CDATA[test-automation]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Thu, 09 Oct 2025 17:41:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1760031547880/3725a25e-f57b-4fb1-99f8-849a9cbd4716.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>A QA Engineer Roadmap is the guide many aspiring testers wish they had from day one. Whether you’re new to software testing or aiming to level up toward automation, this roadmap will show you the path. In this article, you'll get a clear, structured progression from fundamentals through advanced practices, understand key skills, tackle common pitfalls, and gain actionable advice on building your QA career. By the end, you’ll have a complete blueprint for how to grow from a novice tester into a confident QA engineer.</p>
<h3 id="heading-understanding-the-qa-engineer-roadmap-amp-core-milestones">Understanding the QA Engineer Roadmap &amp; Core Milestones</h3>
<p>When you hear “QA Engineer Roadmap,” think of it as a learning map—a sequence of phases or milestones that take you from basic testing knowledge to advanced, real-world QA skills. The core milestones often include <strong>foundations of testing</strong>, <strong>automation</strong>, <strong>performance and non-functional testing</strong>, <strong>security testing</strong>, and <strong>continuous integration / DevOps alignment</strong> (CI/CD, containers, pipelines). A good roadmap pinpoints which skills or tools to pick up at each phase, how they build on each other, and roughly when to move forward.</p>
<p>One common segment you’ll see is the transition from <strong>manual testing</strong> to <strong>automation testing</strong>. Manual testing ensures you understand fundamental concepts—bug reporting, test planning, test case design, black-box / white-box techniques. Once those concepts are solid, the roadmap suggests learning automation tools (like Selenium, Playwright, or Cypress) and APIs. Beyond automation lies performance testing (JMeter, load testing), security testing (OWASP, vulnerability scanning), and then integrating QA practices into DevOps pipelines. Each step leverages what you’ve learned before.</p>
<p>Roadmap sources show this structure in their QA roadmap: starting with test oracles, testing approaches, then manual testing and eventually automation, performance, security, and integration with DevOps practices. Using a well-designed QA roadmap ensures you don’t skip essential foundations or jump too early to tools you’re not ready for.</p>
<h3 id="heading-from-manual-to-automation-a-detailed-roadmap">From Manual to Automation: A Detailed Roadmap</h3>
<p>One of the biggest inflection points in the QA Engineer Roadmap is the shift from manual testing to automation testing. According to multiple career guides, mastering this transition increases your employability and potential salary.</p>
<p>First, embed strong manual testing skills: writing clear bug reports, exploring edge cases, understanding user flows, and grasping the software development lifecycle (SDLC). After that, you should build your programming basics (often in languages like Python, JavaScript, or Java) so you can script automated tests. Many QA roadmaps advise you to pick one automation framework (e.g. Selenium, Cypress, Playwright) as your first automation tool and begin automating UI or API test cases.</p>
<p>Once automation basics are in place, you should learn test architecture: how to build maintainable test suites, use the Page Object Model, parallel execution, test data management, and integrate automated tests into a CI/CD pipeline (Jenkins, GitLab CI, GitHub Actions, etc.). At that stage, branching into performance / load testing (JMeter, k6), security scanning tools, and monitoring becomes natural.</p>
<p>Industry data suggests testers who learn automation tend to progress faster: job listings for “QA engineer” roles increasingly require automation experience, and many modern development teams expect testers to pull, push, and understand pipelines as part of DevOps workflows.</p>
<h3 id="heading-career-path-skills-amp-certifications-in-qa-engineering">Career Path, Skills &amp; Certifications in QA Engineering</h3>
<p>As you move along your QA Engineer Roadmap, you’ll want to align your skill development with career expectations. Many QA professionals follow a career path that begins as a <strong>Junior QA / Test Analyst</strong>, then advances to <strong>QA Engineer / Automation Engineer</strong>, and eventually to <strong>Senior QA Engineer, QA Lead, Test Architect</strong>, or even <strong>QA Manager</strong>. The roadmap should mirror these transitions: foundational skills in early years, then automation and technical proficiency, then leadership, architecture, and strategic roles.</p>
<p>Certifications play a role in many QA roadmaps. The <strong>ISTQB (International Software Testing Qualifications Board)</strong> offers a tiered structure: Foundation, Advanced, Specialist. Aligning your certification steps with the roadmap can help legitimize your skills.</p>
<p>In terms of skills, besides technical automation tools, you should cultivate soft skills: communication (reporting issues clearly), collaboration (working with developers, product, ops), understanding domain/business logic, and critical thinking (anticipating edge cases). Some roadmaps also mention <strong>non-functional testing</strong> (performance, load, security) and <strong>testing in CI/CD / DevOps contexts</strong> as essential differentiators for mid/advanced levels.</p>
<h3 id="heading-common-challenges-questions-amp-how-to-overcome-them">Common Challenges, Questions &amp; How to Overcome Them</h3>
<p>Many learners following a QA Engineer Roadmap encounter repeated concerns: “How long will this take?”, “Do I need to know programming to start?”, “Which automation tool should I pick first?”, “What about job experience without internships?”</p>
<p>Timeframes vary: many learners take 6–12 months to move from zero to entry level QA roles (manual + basic automation), and another 6–24 months to master intermediate/advanced automation and integrations. If you lack a formal tech background, you can begin with manual testing and gradually pick up programming.</p>
<p>Choosing tools can be confusing. A common strategy is to pick one mainstream automation framework (e.g. Selenium WebDriver or Cypress) and stick with it until you're comfortable, then explore others. Remember that the concepts (test design, maintainability, automation architecture) often transfer between tools.</p>
<p>For job experience, build a portfolio: open source testing contributions, side projects, mock test suites, bug bounty tasks or volunteering your testing skills in small apps. In interviews, emphasize your understanding of testing fundamentals, your roadmap and progression, and any automation you have done—even in toy projects.</p>
<h3 id="heading-tips-amp-resources-to-stay-updated-on-qa-trends">Tips &amp; Resources to Stay Updated on QA Trends</h3>
<p>Once your QA Engineer Roadmap is in motion, staying current is crucial. The testing landscape evolves: new frameworks, CI/CD tools, AI in testing, containerized testing, cloud test environments, etc. Regularly read QA / testing blogs, follow communities (StackOverflow, Reddit /r/QualityAssurance, testing Slack/Discord groups), and subscribe to podcasts.</p>
<p>Another tip: replicate parts of your roadmap in mini-projects. For example: build an end-to-end test suite for a demo app, integrate regression tests in CI, simulate load testing, try a security scan tool. Practical hands-on builds reinforce theory.</p>
<p>When you encounter new tools or frameworks, map them back to your master QA Engineer Roadmap to decide when to adopt them, rather than chasing every new trend immediately.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Creating a QA Engineer Roadmap helps you plan your journey with structure and confidence. Start with testing basics, move to automation, then learn about performance and security, and finally, integrate into DevOps pipelines. Each step builds your skills for real jobs. You've also learned how career paths, certifications, common challenges, and staying updated are important for growth.</p>
<p>Now, take action: decide where you are on the roadmap, choose one tool or skill to master next, and start working on your plan. Save this article as your guide, and revisit it every year to update your roadmap with new trends and your changing goals.</p>
<p>If you need help creating a QA roadmap for your experience, tools, or job market, just let me know. I'd be happy to help you refine it.</p>
]]></content:encoded></item><item><title><![CDATA[Aligning Business with Technology]]></title><description><![CDATA[In large enterprises, software developers and IT staff often operate in a very different day‑to‑day world from business or management professionals. Engineers tend to focus on technical excellence, specialized tools, and code quality, whereas busines...]]></description><link>https://articles.eminmuhammadi.com/building-a-collaborative-corporate-culture-between-business-and-tech-teams</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/building-a-collaborative-corporate-culture-between-business-and-tech-teams</guid><category><![CDATA[software development]]></category><category><![CDATA[agile development]]></category><category><![CDATA[Scrum]]></category><category><![CDATA[corporate]]></category><category><![CDATA[Business growth ]]></category><category><![CDATA[hybrid work]]></category><category><![CDATA[cross-functional teams ]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Sun, 14 Sep 2025 11:45:43 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757850238315/3f638593-67fc-46a4-86e5-0980407573e9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In large enterprises, software developers and IT staff often operate in a very different day‑to‑day world from business or management professionals. Engineers tend to focus on technical excellence, specialized tools, and code quality, whereas business teams prioritize market needs, product features, and customer experience. These differing priorities can lead to <strong>misaligned goals and communication breakdowns</strong>: for example, developers may emphasize performance and stability while product managers push for new features to meet business targets.</p>
<p>Such conflicts are common and can stall projects; as one expert notes, teams using different jargon or incentives often struggle to work toward shared objectives. In practice, misunderstandings happen when each group expects the other to adjust, leading to delays. To overcome this, organizations need to foster mutual understanding. Leaders must clarify how every role contributes to the company’s mission and create a <strong>shared sense of ownership</strong> so that engineers and business colleagues feel equally responsible for outcomes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757849442974/0b881127-dd81-46e5-b271-1d600bd94073.jpeg" alt="Unified teamwork – building trust and shared goals between technical and non-technical staff." class="image--center mx-auto" /></p>
<p>As one industry analysis observes, cross-team collaboration often fails without intentional culture change. For example, mixed agile “squads” or project teams that include both IT and business members can break down silos. In a European bank case study, IT and commercial staff were co‑located in small squads, <em>“constantly testing what they might offer our customers”</em> in an environment with <strong>no rigid handoffs or managers controlling collaboration</strong>. This “end-to-end” team structure – where software engineers sit in the same space as product and marketing colleagues – was crucial to ING Netherlands’ agile transformation. It eliminated the departmental handovers that traditionally slowed projects.</p>
<p>In other words, embedding technologists and business people in the same teams helped align everyone on a <em>common definition of success</em>. Deloitte likewise reports that some banks are shifting toward <strong>cross-functional teams</strong> with IT, product, and business unit members working together, which “enhance[s] collaboration” and spreads agility throughout the process. These cross-disciplinary squads unite diverse expertise and make communication more fluid, reducing the “drag” caused by siloed planning.</p>
<p>In the corporate world, roles are usually more specialized than at a startup. Large companies often <strong>assign engineers very specific tasks</strong> within a rigid structure. One recruiter notes that in a big company, “you will usually work in a dedicated team and receive tasks according to your skillset,” allowing you to master a niche but also requiring you to interact through formal chains of command. This setup can be good for developing deep expertise, but it might isolate teams. Having engineers work closely with business partners helps solve this. For example, the image below shows a developer focused on coding, which is common in corporate projects. However, even in such settings, it's important to link technical work to the overall strategy.</p>
<p>When engineers understand the market goals behind their backlog, they can prioritize the right features and innovate within constraints. Modern corporate tech cultures try to balance this by holding joint planning sessions, rotating team members, and pairing engineers with product owners. In practice, emphasizing a <strong>“single vision”</strong> (e.g. a unified product roadmap) helps each team understand how its work fits into company objectives.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1757849723359/03526abe-2f05-4077-8f5b-c9075c5875a5.jpeg" alt="software engineer in a focused development environment. In structured corporate settings, engineers often have specialized roles but still benefit from linking technical work to business goals" class="image--center mx-auto" /></p>
<p>Current workplace trends reinforce these collaborative approaches. <strong>Hybrid and flexible work models</strong> are now widespread across Europe, changing how teams interact. A <a target="_blank" href="https://explore.zoom.us/media/reinventing-teams-hybrid-world-idc.pdf">2023 report</a> found that 72% of European companies use hybrid work arrangements and see bottom-line benefits like higher profitability and efficiency. At the same time, 93% of employees in Europe rate flexibility as very important.</p>
<p>For software teams, this often means code and meetings occur partly online. To keep tech and business workers connected in a hybrid setting, it's important to have clear ways to communicate and shared routines, like regular video stand-ups or joint digital whiteboard sessions. Agile and DevOps practices support this by emphasizing frequent feedback and transparency. Indeed, many firms now adopt agile methodologies (Scrum, Kanban, etc.) so that everyone can iterate quickly and stay aligned. Banks, in particular, have begun setting up continuous delivery pipelines and product‑focused squads. Instead of one big waterfall project, teams iterate on small releases every few weeks.</p>
<p>This product-led approach makes it easier for business stakeholders to review work regularly and adjust priorities. It also encourages engineers and business staff to engage in the same sprint reviews or backlog grooming sessions, building a shared language.</p>
<p>Across Europe (including the South Caucasus region), governments and companies are also pushing digital transformation. A recent strategy document in one country of the Caucasus underscores this shift: it proposes <strong>collaborative, flexible decision-making</strong> among businesses, civil society and public institutions to drive innovation. This reflects a broader trend of valuing cross‑sector collaboration. As those initiatives highlight, open communication and adaptability are as important as the technology itself. In practice, multinational enterprises often run cross-border innovation hubs or joint tech-business training programs to cultivate this mindset. Similarly, corporate culture efforts now often include “reverse mentoring” (business leaders learning tech topics and vice versa) and cross-training (rotating finance people into IT or developers into customer support for short periods). These practices help build empathy and reduce the “us vs them” mentality.</p>
<p>Ultimately, a healthy corporate culture for mixed technical and business teams relies on <strong>clarity and respect</strong>. Leaders must clearly communicate strategy and involve both sides in planning. Engineers should be encouraged to explain technical trade-offs in plain language, and business folks should strive to understand at least the basics of the development process. Establishing shared metrics (like customer satisfaction or deployment frequency) ensures everyone moves toward the same targets. Trust grows when teams see that each discipline brings valuable expertise: for example, including engineers in customer demos or having marketers attend sprint retrospectives can break down stereotypes. When done well, these efforts foster a sense of joint ownership and belonging, which in turn boosts engagement and performance.</p>
<p>In summary, combining software engineers with business teams under one corporate roof presents challenges of jargon, misaligned incentives, and traditional silos. To address these, many European organizations are embracing agile, hybrid work, and cross-functional structures. They align everyone on shared goals, promote open communication, and integrate roles so that developers and business professionals truly work <em>with</em> each other rather than <em>alongside</em> each other. By nurturing this collaborative culture, enterprises can turn the diversity of skills into a strategic advantage, accelerating innovation while meeting business objectives.</p>
<h3 id="heading-references-amp-resources">References &amp; Resources</h3>
<ol>
<li><p><a target="_blank" href="https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/how-to-build-a-culture-of-collaboration-between-business-and-technology"><strong>McKinsey &amp; Company</strong> — <em>“How to build a culture of collaboration between business and technology”</em></a></p>
</li>
<li><p><a target="_blank" href="https://www2.deloitte.com/insights/us/en/industry/financial-services/future-of-banking-technology.html">Deloitte — “The future of banking: Bringing IT and business closer together”</a></p>
</li>
<li><p><a target="_blank" href="https://www.forbes.com/advisor/business/working-at-a-startup-vs-a-big-company/">Forbes — “What it’s like to work at a big tech company vs. a startup”</a></p>
</li>
<li><p><a target="_blank" href="https://www.mckinsey.com/business-functions/people-and-organizational-performance/our-insights/how-ing-transformed-an-entire-bank"><strong>ING</strong> — <em>Agile transformation case study: How ING adopted cross-functional squads</em></a></p>
</li>
<li><p><a target="_blank" href="https://www.ey.com/en_gl/workforce/hybrid-working-in-europe"><strong>EY</strong> — <em>“Hybrid work in Europe: Benefits and challenges”</em> (EY 2023 European workforce study)</a></p>
</li>
<li><p><a target="_blank" href="https://documents.worldbank.org/en/publication/documents-reports/documentdetail"><strong>World Bank</strong> — <em>“Digital Transformation Strategy for the South Caucasus”</em> (policy insights on collaboration and innovation)</a></p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Chinese Remainder Theorem Simplified]]></title><description><![CDATA[The Chinese Remainder Theorem (CRT) sounds fancy but it is one of the most concrete and useful results in elementary number theory: it explains how a number can be completely described by its remainders when divided by several pairwise-coprime moduli...]]></description><link>https://articles.eminmuhammadi.com/chinese-remainder-theorem-simplified</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/chinese-remainder-theorem-simplified</guid><category><![CDATA[Cryptography]]></category><category><![CDATA[RSA]]></category><category><![CDATA[step-by-step guide]]></category><category><![CDATA[Mathematics]]></category><category><![CDATA[information-theory]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Thu, 11 Sep 2025 19:20:03 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1757618388201/7e1a1561-7da0-43a4-ac7c-c1fda8e0e9e9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The Chinese Remainder Theorem (CRT) sounds fancy but it is one of the most concrete and useful results in elementary number theory: it explains how a number can be completely described by its remainders when divided by several <a target="_blank" href="https://en.wikipedia.org/wiki/Coprime_integers">pairwise-coprime</a> moduli, and it gives an explicit, efficient recipe to reconstruct that number.</p>
<p>Imagine the problem like this: someone hides a number and gives you a few clues — its remainder when divided by 3, by 5, by 7, and so on. If the numbers <code>(3, 5, 7, ...)</code> are pairwise coprime (no two have a common factor greater than 1), these clues together uniquely determine the hidden number, except for adding multiples of the product of these numbers.</p>
<p>Giving the remainders for each factor is the same as giving the remainder for the product. This idea is not only neat but also very useful. It lets you break down big modular calculations into smaller, easier, and faster ones, which can be done in parallel. Then, you can combine the results using a simple, low-cost method.</p>
<p>Understanding CRT begins with the intuition of “matching residues” and ends with a handful of reliable computational tools — the product <code>N</code> of the moduli, the partial products <code>Nᵢ = N / nᵢ</code>, and the modular inverses of those partial products modulo each corresponding modulus — and once you grasp why those pieces fit together, everything else (examples, edge cases, optimizations) becomes straightforward.</p>
<p>To see the theorem in its simplest form, consider two coprime integers <code>p</code> and <code>q</code>. The two-modulus CRT states: for any pair of residues <code>a (mod p)</code> and <code>b (mod q)</code> there exists a unique <code>x</code> modulo <code>N = p·q</code> such that <code>x ≡ a (mod p)</code> and <code>x ≡ b (mod q)</code>.</p>
<p>The constructive proof gives us the algorithm: compute <code>N = p·q</code>, set <code>N₁ = N/p</code> and <code>N₂ = N/q</code>; find integers <code>y₁</code> and <code>y₂</code> so that <code>N₁·y₁ ≡ 1 (mod p)</code> and <code>N₂·y₂ ≡ 1 (mod q)</code> — these y’s are modular inverses and can be computed efficiently using the extended Euclidean algorithm — and then the number <code>x = a·N₁·y₁ + b·N₂·y₂</code> (reduced modulo N) satisfies both <a target="_blank" href="https://en.wikipedia.org/wiki/Modular_arithmetic">congruences</a>.</p>
<p><code>N₁</code> is divisible by <code>q</code> and so contributes <code>0 (mod q)</code> while <code>N₁·y₁ ≡ 1 (mod p)</code> gives the needed <code>a (mod p)</code>; symmetrically <code>N₂</code> contributes <code>0 (mod p)</code> and <code>1 (mod q)</code>. Uniqueness follows because if two numbers agree modulo both <code>p</code> and <code>q</code> then their difference is divisible by both <code>p</code> and <code>q</code>; since <code>p</code> and <code>q</code> are coprime their product divides the difference, so they are the same modulo <code>N</code>. This same construction extends to any finite list of pairwise-coprime moduli <code>n₁, n₂, …, n_k</code>: compute <code>N = ∏ nᵢ</code>, compute <code>Nᵢ = N / nᵢ</code> for every <code>i</code>, compute <code>yᵢ = (Nᵢ)^{-1} mod nᵢ</code>, and set <code>x ≡ Σ aᵢ·Nᵢ·yᵢ (mod N)</code>. The extended Euclidean algorithm gives every modular inverse efficiently; conceptually you only need to know that an inverse exists because <code>gcd(Nᵢ, nᵢ) = 1</code> when the moduli are pairwise coprime.</p>
<p>To make this concrete, walk through examples carefully and check arithmetic as you go: solve <code>x ≡ 2 (mod 3)</code> and <code>x ≡ 3 (mod 5)</code>. Here <code>N = 15</code>, <code>N₁ = 5</code>, <code>N₂ = 3</code>. Find <code>y₁</code> with <code>5·y₁ ≡ 1 (mod 3)</code>: since <code>5 ≡ 2 (mod 3)</code> and <code>2·2 ≡ 1 (mod 3)</code>, <code>y₁ = 2</code>. Find <code>y₂</code> with <code>3·y₂ ≡ 1 (mod 5): 3·2 = 6 ≡ 1 (mod 5)</code>, so <code>y₂ = 2</code>. Then <code>x = 2·5·2 + 3·3·2 = 20 + 18 = 38 ≡ 8 (mod 15)</code>, and indeed <code>8 ≡ 2 (mod 3) and 8 ≡ 3 (mod 5)</code>.</p>
<p>The CRT construction is mechanical and scales: for three or more moduli you do the same for each index and sum all contributions, reducing modulo <code>N</code> at the end (or along the way to keep numbers small). A useful operational detail for programmers is always to reduce <code>aᵢ</code> modulo <code>nᵢ</code> before combining, and to compute the <code>Nᵢ</code> and <code>yᵢ</code> once when you perform many reconstructions (precompute and cache them). For extremely large moduli use big-integer libraries; for repeated conversions between the direct-product (CRT) representation and the canonical single-modulus representation the little precomputation (<code>Nᵢ</code> and <code>yᵢ</code>) reduces every conversion to k multiplications and one modular reduction, which is fast.</p>
<p>Because the CRT is constructive, it turns into straightforward, reliable code. The modular inverse function is implemented with the extended <a target="_blank" href="https://en.wikipedia.org/wiki/Euclidean_algorithm">Euclidean algorithm</a>; once you have modinv you implement CRT by multiplying out <code>N</code> and combining the <code>aᵢ·Nᵢ·modinv(Nᵢ, nᵢ)</code> terms. Here is a compact, safe Python implementation you can drop into a utility module:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">extended_gcd</span>(<span class="hljs-params">a, b</span>):</span>
    <span class="hljs-keyword">if</span> b == <span class="hljs-number">0</span>:
        <span class="hljs-keyword">return</span> (<span class="hljs-number">1</span>, <span class="hljs-number">0</span>, a)
    x1, y1, g = extended_gcd(b, a % b)
    <span class="hljs-keyword">return</span> (y1, x1 - (a // b) * y1, g)

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">modinv</span>(<span class="hljs-params">a, m</span>):</span>
    x, y, g = extended_gcd(a, m)
    <span class="hljs-keyword">if</span> g != <span class="hljs-number">1</span>:
        <span class="hljs-keyword">raise</span> ValueError(<span class="hljs-string">"Inverse does not exist (not coprime)"</span>)
    <span class="hljs-keyword">return</span> x % m

<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">crt</span>(<span class="hljs-params">pairs</span>):</span>
    <span class="hljs-string">"""
    pairs: list of (ai, ni) with ni pairwise coprime.
    Returns (x, N) where x is the smallest nonnegative solution modulo N = product(ni).
    """</span>
    N = <span class="hljs-number">1</span>
    <span class="hljs-keyword">for</span> _, n <span class="hljs-keyword">in</span> pairs:
        N *= n
    total = <span class="hljs-number">0</span>
    <span class="hljs-keyword">for</span> a, n <span class="hljs-keyword">in</span> pairs:
        a = a % n
        Ni = N // n
        yi = modinv(Ni, n)
        total += a * Ni * yi
    <span class="hljs-keyword">return</span> (total % N, N)
</code></pre>
<p>This code is robust for educational and moderate production uses; for cryptographic workloads prefer vetted libraries and big-int implementations that include <a target="_blank" href="https://en.wikipedia.org/wiki/Timing_attack">timing-attack</a> mitigations.</p>
<p>In conclusion, the Chinese Remainder Theorem is a concrete, practical bridge between theory and computation: it tells you that a number modulo a large product can be represented exactly by its residues modulo each pairwise-coprime factor, and it gives a simple, deterministic recipe to reconstruct that number using partial products and modular inverses. This means CRT turns intimidating large-modulus problems into many small, independent tasks that are easier to reason about, implement, and even parallelize; for practitioners it offers real speedups (notably in RSA private-key operations) and a convenient representation that can be precomputed and reused when you convert many times.</p>
]]></content:encoded></item><item><title><![CDATA[Randomness in Cryptography: True Random Sources, Entropy, and Pseudorandom Generators Explained]]></title><description><![CDATA[Randomness is a word you hear in everyday life and in headlines about security breaches, but it hides subtle technical meaning. In cryptography, randomness is the backbone of secret keys, tokens, and nonces; when randomness fails, systems fail.
What ...]]></description><link>https://articles.eminmuhammadi.com/randomness-in-cryptography-true-random-sources-entropy-and-pseudorandom-generators-explained</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/randomness-in-cryptography-true-random-sources-entropy-and-pseudorandom-generators-explained</guid><category><![CDATA[lavarand]]></category><category><![CDATA[Cryptography]]></category><category><![CDATA[random numbers]]></category><category><![CDATA[rng]]></category><category><![CDATA[pseudo-random]]></category><category><![CDATA[Entropy]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Mon, 18 Aug 2025 11:50:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1755517685759/27b760ee-59cd-4733-8190-d40746e807f6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Randomness is a word you hear in everyday life and in headlines about security breaches, but it hides subtle technical meaning. In cryptography, randomness is the backbone of secret keys, tokens, and nonces; when randomness fails, systems fail.</p>
<h3 id="heading-what-random-really-means">What “random” really means</h3>
<p>In everyday terms, randomness is unpredictability. A coin toss feels random because you cannot reliably predict whether it will land heads or tails. Technically, <a target="_blank" href="https://en.wikipedia.org/wiki/Randomness">randomness</a> combines unpredictability with lack of bias. A truly random source gives outcomes that cannot be predicted better than by chance and that follow expected frequencies. If you can forecast future outputs better than chance, the source is not random in the cryptographic sense.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755515689739/c1369313-0d47-48bc-a39f-a727f21c22f6.webp" alt="A hand rolling two twenty-sided dice on a wooden surface." class="image--center mx-auto" /></p>
<p>To make these ideas precise, cryptographers use the mathematical notion of entropy, which quantifies how much uncertainty or “surprise” a source produces on average. Formally, for a discrete random variable X that takes values x with probabilities p(x), <a target="_blank" href="https://en.wikipedia.org/wiki/Entropy_\(information_theory\)">Shannon entropy</a> is written as <code>H(X) = –Σ p(x) log₂ p(x)</code>. Entropy is measured in bits and has an intuitive interpretation: one bit of entropy equals the uncertainty of a fair coin toss. For a fair coin, the calculation is straightforward and instructive. Each outcome has probability 0.5, so <code>H = –[0.5·log₂(0.5) + 0.5·log₂(0.5)]</code>. The logarithm <code>log₂(0.5)</code> equals –1, so the inner products are 0.5·(–1) = –0.5 and another –0.5; their sum is –1.0, and the negative of that is 1.0. Thus, a fair coin toss carries exactly 1.0 bit of entropy.</p>
<p>The practical consequence for security is clear: when cryptographers say a source is “random,” they mean that it resists feasible prediction and that its entropy content is well understood and sufficient for the intended use. Entropy estimates guide decisions about seed sizes and key strengths because the number of bits of entropy roughly corresponds to the logarithm of the attacker’s search space. If a key is intended to provide 128 bits of security, the secret material used to derive that key should contain comparable entropy; otherwise, an attacker’s effective effort to brute force or guess the secret is reduced. In short, unpredictability and lack of bias are two sides of the same coin (pun intended), and entropy is the rigorous tool cryptographers use to measure how secure that coin toss really is.</p>
<h3 id="heading-true-randomness-from-the-physical-world">True randomness from the physical world</h3>
<p>Computers are deterministic machines and cannot produce true randomness by themselves, so cryptographers and engineers turn to the messy, noisy processes of the physical world when they need genuine unpredictability. Physical randomness comes from measurements of phenomena that are fundamentally difficult (or impossible) to predict in practice: thermal agitation of electrons in a resistor, the microscopic timing of photons striking a photodiode, the discrete and memoryless events of radioactive decay, or the constant hiss of atmospheric radio noise. Each of these phenomena produces signals that, when observed through sensitive electronics, look like irregular fluctuations rather than the smooth, repeatable patterns that a CPU naturally generates. That irregularity is what we want: entropy that an attacker cannot reproduce or control without physically accessing the device or altering the environment.</p>
<p>Real-world hardware random number generators include not only discrete sensors but entire subsystems engineered for robustness. Designs typically include analog front-ends to amplify noise, high-resolution analog-to-digital converters to sample it, and digital conditioners to remove bias. They also include run-time health checks and continuous statistical monitoring so that the system can detect sensor failures, saturation, or an unexpected shift in distribution and fail safely. For embedded devices or virtual machines that may have limited entropy at startup, engineers add entropy pooling and conservative seeding strategies to avoid predictable initial states, because low entropy at boot time has been the root cause of several high-profile cryptographic failures.</p>
<p>Hardware RNGs are not immune to attack. An attacker with physical access can alter the environment (apply heat, inject signals, or clamp supply voltages) to bias the source, and even remote attackers can sometimes exploit shared hardware effects or firmware flaws to influence entropy. <a target="_blank" href="https://en.wikipedia.org/wiki/Side-channel_attack">Side-channel leaks</a> can reveal partial state, and subtle implementation bugs can turn an ostensibly random process into a predictable one. Mitigations include sensor redundancy, tamper-evident packaging, continuous self-tests, and cryptographic post-processing that assumes part of the input may be under adversarial control while still extracting usable randomness from the uncontaminated portion.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755516138295/b35a0f23-8127-4586-9c2a-d7797d6e7ed7.jpeg" alt="A wall display of colorful lava lamps is visible through a glass window with &quot;Cloudflare&quot; printed on it." class="image--center mx-auto" /></p>
<p><a target="_blank" href="https://blog.cloudflare.com/randomness-101-lavarand-in-production/">Cloudflare</a>’s “lava lamp” setup is one of the most fun and concrete examples of how engineers turn physical chaos into cryptographic strength. At a high level the idea is simple: film analog, hard-to-predict phenomena, convert the imagery into bits, mix that with other entropy sources, and feed the result into a cryptographically secure pseudorandom generator that helps seed keys and other security-critical values. Cloudflare publicly documented the system (called <a target="_blank" href="https://en.wikipedia.org/wiki/Lavarand">LavaRand</a>) and how the camera feed is processed and mixed into their production randomness pool.</p>
<p>The lava-lamp story actually has roots before Cloudflare. The notion of using chaotic visual patterns to harvest entropy goes back to projects like Lavarand from the 1990s and later LavaRnd, which showed that inexpensive webcams and chaotic scenes can produce useful true randomness when properly digitized and post-processed. Cloudflare adopted and industrialized that concept; they point a camera at about a hundred lava lamps in their San Francisco lobby and capture frames continuously, taking advantage of the lamps’ complex, heat-driven fluid dynamics to supply unpredictable variation.</p>
<p>Technically, the pipeline is capture, digitize, condition, and mix. The camera captures images; the pixel data and small sensor noise become raw entropy. That raw data is not used directly. Instead the images are converted into bitstreams, deterministic artifacts and obvious biases are removed, and the result is cryptographically conditioned (for example by hashing or other extractors) before it is mixed into an entropy pool and used to seed or reseed a <a target="_blank" href="https://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator">CSPRNG</a>. Cloudflare’s public posts describe these steps and emphasize that the lava lamps are only one source in a multi-source design: the lamp feed is combined with kernel-level sources and other independent hardware sensors to reduce the risk any single source can be predicted or subverted.</p>
<p>The approach has practical benefits and limitations. The benefit is real, high-quality entropy that is hard for a remote attacker to predict. The practical limitations are bandwidth and engineering complexity: camera-based schemes produce entropy at limited rates compared with purely electronic <a target="_blank" href="https://en.wikipedia.org/wiki/Hardware_random_number_generator">TRNGs</a>, require careful analog and software conditioning, and must run continuous health checks to detect stuck pixels, saturated sensors, or other failures. Designers must assume someone might attempt to bias or observe a physical source, so robust mixing with other independent entropy sources and careful entropy estimation are essential. Cloudflare’s public write-ups and journalists’ interviews discuss the tradeoffs and the precautions taken to keep the system trustworthy.</p>
<p>If you want an approachable demonstration you can try at home: point a webcam at a chaotic visual scene (moving shadows, fire, water ripples, or even a desktop lava lamp), capture short bursts of frames, compute a cryptographic hash of each frame, and treat the hashes as seed material for a local PRNG. For any practical security deployment, however, you need conservative entropy estimation, hardware and software safeguards, mixing of multiple independent sources, and preferably a vetted <a target="_blank" href="https://en.wikipedia.org/wiki/Random_number_generation">RNG</a> design rather than a DIY system for critical uses.</p>
<h3 id="heading-pseudorandomness-how-deterministic-algorithms-imitate-randomness">Pseudorandomness: how deterministic algorithms imitate randomness</h3>
<p>A pseudorandom number generator (<a target="_blank" href="https://en.wikipedia.org/wiki/Pseudorandom_number_generator">PRNG</a>) is a deterministic algorithm that stretches a small chunk of true randomness, called a seed, into a long sequence of values that look random to observers who do not know the seed. The core idea is simple but powerful: instead of relying on expensive or slow physical measurements for every random bit, a system captures a modest amount of entropy once, feeds it into a carefully designed algorithm, and then produces many bits that are suitable for ordinary use. Because the process is deterministic, anyone who knows the seed or the algorithm’s full internal state can reproduce the entire output stream exactly; the security goal of cryptographic designs is therefore to make the internal state infeasible to recover from observing outputs.</p>
<p>Not all PRNGs are created equal. For many applications statistical appearance is enough: the outputs should pass a variety of randomness tests and have long periods so patterns do not repeat within the expected runtime. <a target="_blank" href="https://en.wikipedia.org/wiki/Mersenne_Twister">Mersenne Twister</a> is a classic example used widely for such nonsecurity use cases because it has excellent statistical qualities and a very long period. However, statistical quality alone does not imply security. A generator can pass many statistical tests and still be trivial to predict if an attacker learns its state or if the algorithm has structural weaknesses.</p>
<p>Cryptographically secure PRNGs, or CSPRNGs, add a stronger requirement: even with access to a large amount of output, an attacker should not be able to predict future outputs or recover previous internal state within feasible computational limits. One formal way to state this requirement is the next-bit unpredictability property: no efficient adversary, given the first k bits of the generator’s output, can predict bit k+1 with probability significantly better than one half. A variety of practical CSPRNG constructions realize this property under standard cryptographic assumptions. Two common construction patterns are block-cipher-based counter mode, where a secret key encrypts an increasing counter to produce output blocks, and stream-cipher or hash-based designs, where keyed primitives are used to mix state and generate bits.</p>
<p>Seeding and reseeding are central operational issues. A CSPRNG is only as strong as its seed: if the seed lacks sufficient entropy, the output provides a false sense of security. Good practice takes a conservative approach to entropy estimation, collects seed material from multiple independent sources when possible, and reseeds the CSPRNG periodically or on specific events (for example, when new hardware entropy becomes available, or after a long uptime). Reseeding helps limit the damage if the internal state is ever partially exposed, because later reseeds restore unpredictability for future outputs. Designers also consider forward secrecy and backward secrecy: forward secrecy prevents an attacker who learns the current state from reconstructing past outputs, and backward secrecy (also called recovery) ensures that after reseeding, previous state compromise does not imply future compromise.</p>
<p>Practical CSPRNGs are implemented in many libraries and operating system primitives. Examples you may encounter include <a target="_blank" href="https://en.wikipedia.org/wiki/HMAC">HMAC</a> or hash-based <a target="_blank" href="https://csrc.nist.rip/glossary/term/deterministic_random_bit_generator">DRBGs</a>, <a target="_blank" href="https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation">AES-CTR</a> or <a target="_blank" href="https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation">AES-CBC</a> based DRBGs, and stream cipher constructions such as <a target="_blank" href="https://en.wikipedia.org/wiki/ChaCha20-Poly1305">ChaCha20</a> used as generators. Standards and well-reviewed implementations matter; reinventing your own generator is risky because subtle mathematical or implementation flaws can create catastrophic predictability. Real-world incidents underline this risk: a tiny change in how an operating system seeded its randomness or a single removed line of code has previously reduced effective entropy and allowed attackers to guess supposedly secret keys. Another infamous controversy involved a standardized DRBG whose parameters sparked suspicion of a potential backdoor; that episode taught the community to prefer transparent, auditable constructions and to avoid obscure defaults without scrutiny.</p>
<p>Performance, state size, and period are practical trade-offs. A generator intended for heavy simulation workloads favors very long period and high throughput; a generator intended for cryptographic key material places a stronger emphasis on provable unpredictability and secure state management, possibly tolerating higher computational cost. State compromise extensions and health checks are engineering features that help; implementations often zeroize state on shutdown, mix in unpredictable inputs from system events, and run continuous statistical sanity checks to detect sensor failure or stuck outputs.</p>
<h3 id="heading-linear-congruential-generator">Linear Congruential Generator</h3>
<p>PRNGs are fast, reproducible, and resource-efficient, making them ideal for simulations, games, and randomized algorithms. For security, though, a PRNG must be cryptographically secure. Using a non-cryptographic PRNG for keys, tokens, or session identifiers can lead to predictable secrets and catastrophic breaches. Historical security incidents frequently trace back to insufficiently random seeds or the misuse of standard library PRNGs for sensitive purposes.</p>
<p>A linear congruential generator (LCG) is one of the oldest and simplest pseudorandom number generators. It is defined by a single recurrence relation that updates an internal integer state and emits the next value as that state (or a function of it). The recurrence looks compact and innocent:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1755517005796/06809938-bd0c-4d92-823d-7c4680543b7c.png" alt class="image--center mx-auto" /></p>
<p>Here m is the modulus, <code>a</code> the multiplier, <code>c</code> the increment, and <code>X0</code> the seed. Despite the short formula, the <a target="_blank" href="https://en.wikipedia.org/wiki/Linear_congruential_generator">LCG</a> illustrates many of the fundamental trade-offs between simplicity, performance, statistical quality, and security.</p>
<p>Understanding why an LCG sometimes works well and sometimes fails begins with its parameters. The modulus mmm sets the numeric range of outputs: each <code>Xn</code> lies in the set <code>{0,1,…,m−1}</code>. The multiplier <code>a</code> and increment <code>c</code> determine how the sequence moves around that range. If <code>c</code> is zero, the generator is called a multiplicative LCG; if c is nonzero it is called mixed. The seed <code>X0</code>​ determines the specific sequence you will get; two different seeds can lead to the same eventual cycle or to different cycles depending on parameters.</p>
<p>Beyond period, LCGs exhibit other structural weaknesses that limit their usefulness. One intuitive problem is that output points from an LCG in multiple dimensions lie on a relatively small number of hyperplanes. If you take successive output values and form tuples such as <code>(Xn,Xn+1,Xn+2)</code>, those points do not fill the 3D space uniformly but instead fall on parallel planes. This lattice structure is formalized by the spectral test, which measures how well the generator fills multiple-dimensional space; LCGs frequently score poorly on this test unless parameters are carefully tuned. Another practical problem arises when the modulus mmm is a power of two, a very common implementation choice because modular reduction is then very cheap in binary machines. In that case low-order bits of the generated numbers have particularly short periods and poor randomness properties; the least significant bit simply toggles if <code>a</code> and <code>c</code> are odd, making low bits unsuitable for direct use without further processing.</p>
<p>Nevertheless, LCGs remain useful in contexts that do not require cryptographic unpredictability. Their implementation is tiny, they are fast, and when properly parameterized they can provide repeatable pseudorandom streams that are good enough for many simulations and nonsecurity randomness needs. Classic textbooks and libraries historically used carefully chosen LCG parameters (for example, values recommended by early numerical libraries) to balance period and statistical behavior. Even so, more modern general-purpose generators such as the Mersenne Twister, PCG (Permuted Congruential Generator), and <a target="_blank" href="https://en.wikipedia.org/wiki/Xorshift">Xoshiro/XSAdd</a> families outperform simple LCGs in statistical tests while retaining speed and convenience; those generators address many of the lattice and low-bit problems by combining linear recurrences with nonlinear output transformations.</p>
<p>If you want to experiment with an LCG, a short Python implementation illustrates the idea and helps you observe its properties in practice. The following snippet implements a simple LCG and demonstrates the cycle detection logic:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">lcg</span>(<span class="hljs-params">a, c, m, seed, n</span>):</span>
    x = seed
    <span class="hljs-keyword">for</span> _ <span class="hljs-keyword">in</span> range(n):
        x = (a * x + c) % m
        <span class="hljs-keyword">yield</span> x

<span class="hljs-string">"""
Example parameters
prints [3, 7, 6, 4, 0, 1, 3, 7, 6, 4]
"""</span>
a, c, m, seed = <span class="hljs-number">2</span>, <span class="hljs-number">1</span>, <span class="hljs-number">9</span>, <span class="hljs-number">1</span>
print(list(lcg(a, c, m, seed, <span class="hljs-number">10</span>)))
</code></pre>
<p>That output shows the short repeating cycle. To check for period exhaustively, iterate until the state repeats and count steps, but be careful with large moduli: naive period detection can be slow if mmm is large.</p>
<p>In practice, if you must use an LCG in production for nonsecurity tasks, choose parameters with care and be aware of the generator’s documented limitations. Prefer a large odd modulus (often a large prime or an odd number with few small prime factors) and multipliers that meet known theoretical criteria for long periods. Avoid using raw low-order bits from power-of-two moduli. Consider combining the LCG output with a nonlinear post-processing step to reduce lattice artifacts, or better yet use a modern generator designed to provide better statistical guarantees for the same or similar computational cost.</p>
<p>The linear congruential generator is an elegant and historically important algorithm: it is easy to understand, cheap to compute, and it teaches fundamental lessons about pseudorandomness. Its linearity, however, is also its fundamental limitation. For simulation, visualization, and educational experiments it is a useful tool and a clean introduction to pseudorandom generators; for cryptography and high-stakes randomness it is the wrong tool and should be replaced by a vetted CSPRNG or a modern high-quality PRNG.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>Randomness underlies everything in cryptography. True physical sources provide the raw unpredictability, and cryptographic pseudorandom generators convert that unpredictability into practical, high-volume randomness suitable for secure systems. Developers should always use platform CSPRNGs for secrets, validate entropy sources on constrained devices, and avoid the temptation to roll their own generators. When randomness is done right, your cryptography stands on firm ground; when it is done poorly, the rest of the system is at risk.</p>
]]></content:encoded></item><item><title><![CDATA[How to Generate Browser Test Automation Scripts Using Chrome DevTools Recorder and Generative AI]]></title><description><![CDATA[In the modern software development lifecycle, test automation has shifted from being a nice-to-have to a fundamental part of CI/CD pipelines. Writing reliable and maintainable automated tests—whether using Selenium or Playwright—requires not only a d...]]></description><link>https://articles.eminmuhammadi.com/how-to-generate-browser-test-automation-scripts-using-chrome-devtools-recorder-and-generative-ai</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/how-to-generate-browser-test-automation-scripts-using-chrome-devtools-recorder-and-generative-ai</guid><category><![CDATA[AI]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[Testing]]></category><category><![CDATA[software development]]></category><category><![CDATA[playwright]]></category><category><![CDATA[selenium]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Tue, 13 May 2025 11:27:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1747135536357/22d2ab55-1c9d-4e5d-b77c-47e21cac7651.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the modern software development lifecycle, test automation has shifted from being a nice-to-have to a fundamental part of CI/CD pipelines. Writing reliable and maintainable automated tests—whether using Selenium or Playwright—requires not only a deep understanding of web technologies but also efficient tooling to bridge the gap between manual test scenarios and automated scripts.</p>
<p>One powerful yet underutilized method for accelerating this process involves leveraging <a target="_blank" href="https://developer.chrome.com/docs/devtools/recorder"><strong>Chrome DevTools' Recorder</strong></a> in conjunction with <a target="_blank" href="https://chat.openai.com/"><strong>ChatGPT</strong></a> to rapidly generate high-fidelity automation scripts. This integrated approach enables QA engineers and developers to capture actual browser interactions and convert them into well-structured test automation code, saving hours of manual scripting and reducing the risk of locator errors or missed steps.</p>
<p>In this article, I will walk you through how to seamlessly integrate Chrome DevTools with ChatGPT to write accurate and production-grade Playwright or Selenium automation scripts, based on recorded browser interactions. This technique is especially useful when dealing with complex forms, authentication flows, or any scenario where manually identifying DOM elements is tedious and error-prone.</p>
<h2 id="heading-understanding-the-chrome-devtools-recorder">Understanding the Chrome DevTools Recorder</h2>
<p>The Recorder panel in Chrome DevTools is designed to capture user flows as they interact with a web application. Introduced in Chrome 97 and continually improved, it provides a visual interface for recording actions such as clicks, text input, key presses, and navigations. Each interaction is stored in a structured JSON format, preserving metadata like element selectors (<code>data-test</code>, <code>aria-label</code>, XPath), timing offsets, and keystrokes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747136038420/8f715952-41aa-49dc-a23a-79c182b6214b.gif" alt class="image--center mx-auto" /></p>
<p>To see this workflow in action, watch this video on YouTube, where we walk through the entire process of generating a test automation script using Chrome DevTools Recorder</p>
<p>This JSON format offers a detailed and deterministic view of the user’s journey, including the exact element paths used during interaction. While Chrome offers native options to export recordings into Puppeteer or Lighthouse flows, the real potential is unlocked when this data is handed off to an intelligent agent—ChatGPT—for transformation into the automation framework of your choice.</p>
<p>These features make the Recorder a perfect companion for <strong>generative AI tools</strong>, which can consume the structured JSON output and convert it into fully functioning automation code in frameworks like <strong>Playwright</strong> or <strong>Selenium</strong>.</p>
<p>By capturing the actual DOM selectors, event types, and timing data, Chrome DevTools Recorder provides a highly accurate blueprint of user behavior. When combined with generative AI models, this blueprint can be transformed into clean, structured, and production-ready test automation scripts—dramatically reducing the manual effort typically involved in writing such scripts from scratch.</p>
<h2 id="heading-recording-the-test-flow-in-chrome">Recording the Test Flow in Chrome</h2>
<p>To begin, open your target web application in Chrome and press <code>F12</code> to open DevTools. Navigate to the “Recorder” tab, which may need to be enabled via DevTools experiments in some versions.</p>
<p>Once inside the Recorder panel, click “Start recording” and begin interacting with your application as an end user would—fill out forms, click buttons, navigate between pages, and complete transactions. Every action is tracked in real-time. When you finish the test flow, click “Stop recording.”</p>
<p>You can now export the flow as a JSON file. This file contains a chronological list of interaction steps, complete with associated selectors, element hierarchy, and expected behaviors. It essentially serves as the raw material for an automation script—human-readable, structured, and ready for transformation.</p>
<h2 id="heading-involving-chatgpt-turning-interactions-into-code">Involving ChatGPT: Turning Interactions into Code</h2>
<p>Once the JSON file is saved, the next step is to leverage ChatGPT to convert it into executable code. This process typically involves uploading the JSON file to ChatGPT or pasting its contents into a prompt with an instruction such as:</p>
<blockquote>
<p>"Convert this Chrome Recorder JSON into a Selenium Java test script that logs in, adds an item to cart, and completes checkout."</p>
</blockquote>
<p>ChatGPT will analyze the structure of the JSON, extract meaningful interactions, map selectors to automation-friendly syntax (like <code>By.cssSelector</code> or <code>page.locator()</code>), and generate a script in the target language and framework.</p>
<p>For Selenium, the script will include WebDriver initialization, wait strategies (if requested), interaction with fields and buttons, and assertions. For Playwright, it will typically include context setup, navigation, and interaction logic with the <code>page</code> object, reflecting the same steps captured in Chrome.</p>
<p>Because the JSON contains detailed selectors and values, ChatGPT is able to use reliable locators—<code>data-test</code>, <code>aria-label</code>, and descriptive XPath expressions—directly from the flow, thus minimizing test fragility caused by dynamic class names or unstable DOM structures.</p>
<h2 id="heading-refining-the-generated-script">Refining the Generated Script</h2>
<p>While AI-generated automation scripts from tools like ChatGPT or other generative platforms are often executable with minimal intervention, a raw script alone is rarely sufficient for enterprise-grade testing. To ensure long-term maintainability, reliability, and alignment with team standards, it is essential that QA engineers <strong>review, refactor, and enhance</strong> these scripts before integrating them into test suites or CI/CD pipelines.</p>
<p>Here is a breakdown of <strong>key refinement areas</strong>, with detailed guidance for each.</p>
<h3 id="heading-replace-static-waits-with-explicit-waits"><strong>Replace Static Waits with Explicit Waits</strong></h3>
<p><strong>Problem:</strong><br />AI-generated scripts often use <strong>static delays</strong> such as <code>Thread.sleep(2000)</code> or <code>page.waitForTimeout(3000)</code> to handle asynchronous loading. This is a major anti-pattern in test automation because it introduces unnecessary delays and causes flakiness, especially in slow or variable environments.</p>
<p><strong>Solution:</strong><br />Replace static waits with <strong>explicit waits</strong> that wait only as long as necessary for a specific condition to be met.</p>
<pre><code class="lang-java">WebDriverWait wait = <span class="hljs-keyword">new</span> WebDriverWait(driver, Duration.ofSeconds(<span class="hljs-number">10</span>));
wait.until(ExpectedConditions.visibilityOfElementLocated(By.id(<span class="hljs-string">"checkout-button"</span>)));
</code></pre>
<pre><code class="lang-javascript"><span class="hljs-keyword">await</span> page.locator(<span class="hljs-string">'#checkout-button'</span>).waitFor({ <span class="hljs-attr">state</span>: <span class="hljs-string">'visible'</span> });
</code></pre>
<p><strong>Why it matters:</strong><br />Explicit waits improve script efficiency, reduce false negatives, and make tests more resilient to real-world conditions.</p>
<h3 id="heading-add-assertions-to-validate-expected-behaviors"><strong>Add Assertions to Validate Expected Behaviors</strong></h3>
<p><strong>Problem:</strong><br />Generated scripts usually focus on <em>performing actions</em> (e.g., clicking buttons, entering data) but often <strong>lack validations</strong> to ensure that the outcome of those actions is as expected.</p>
<p><strong>Solution:</strong><br />Incorporate <strong>assertions</strong> after critical steps to verify application state, such as URL changes, UI updates, text visibility, or data correctness.</p>
<pre><code class="lang-java">assertEquals(<span class="hljs-string">"Thank you for your order!"</span>, driver.findElement(By.className(<span class="hljs-string">"complete-header"</span>)).getText());
</code></pre>
<pre><code class="lang-javascript"><span class="hljs-keyword">await</span> expect(page.locator(<span class="hljs-string">'.complete-header'</span>)).toHaveText(<span class="hljs-string">'Thank you for your order!'</span>);
</code></pre>
<p><strong>Why it matters:</strong><br />Assertions transform your script from a passive flow into a true <strong>test case</strong> by verifying that business requirements are met.</p>
<h3 id="heading-abstract-repeated-actions-into-reusable-methods-or-page-object-classes"><strong>Abstract Repeated Actions into Reusable Methods or Page Object Classes</strong></h3>
<p><strong>Problem:</strong><br />Generated code often duplicates common operations—such as login sequences, adding products to the cart, or checking out—leading to <strong>code redundancy</strong> and poor maintainability.</p>
<p><strong>Solution:</strong><br />Apply the <strong>Page Object Model (POM)</strong> or <strong>custom helper functions</strong> to encapsulate and reuse logic across multiple test cases.</p>
<pre><code class="lang-java"><span class="hljs-keyword">public</span> <span class="hljs-class"><span class="hljs-keyword">class</span> <span class="hljs-title">LoginPage</span> </span>{
    WebDriver driver;

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-title">LoginPage</span><span class="hljs-params">(WebDriver driver)</span> </span>{
        <span class="hljs-keyword">this</span>.driver = driver;
    }

    <span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">login</span><span class="hljs-params">(String username, String password)</span> </span>{
        driver.findElement(By.id(<span class="hljs-string">"user-name"</span>)).sendKeys(username);
        driver.findElement(By.id(<span class="hljs-string">"password"</span>)).sendKeys(password);
        driver.findElement(By.id(<span class="hljs-string">"login-button"</span>)).click();
    }
}
</code></pre>
<p><strong>Why it matters:</strong><br />Abstraction reduces duplication, simplifies test maintenance, and enhances test scalability in large test suites.</p>
<h3 id="heading-parameterize-inputs-for-data-driven-testing"><strong>Parameterize Inputs for Data-Driven Testing</strong></h3>
<p><strong>Problem:</strong><br />Scripts generated from a fixed recording often contain <strong>hardcoded values</strong> (e.g., usernames, product names), which limit test reusability and flexibility.</p>
<p><strong>Solution:</strong><br />Use <strong>data-driven testing patterns</strong> to feed dynamic values into the test, either from CSV, Excel, JSON, or test frameworks like JUnit/TestNG (Selenium) or Playwright’s test configuration files.</p>
<pre><code class="lang-java"><span class="hljs-meta">@DataProvider(name = "loginData")</span>
<span class="hljs-keyword">public</span> Object[][] getData() {
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> Object[][] { { <span class="hljs-string">"standard_user"</span>, <span class="hljs-string">"secret_sauce"</span> }, { <span class="hljs-string">"locked_out_user"</span>, <span class="hljs-string">"secret_sauce"</span> } };
}

<span class="hljs-meta">@Test(dataProvider = "loginData")</span>
<span class="hljs-function"><span class="hljs-keyword">public</span> <span class="hljs-keyword">void</span> <span class="hljs-title">testLogin</span><span class="hljs-params">(String username, String password)</span> </span>{
    loginPage.login(username, password);
    <span class="hljs-comment">// Add assertions</span>
}
</code></pre>
<pre><code class="lang-javascript">test.describe.configure({ <span class="hljs-attr">mode</span>: <span class="hljs-string">'parallel'</span> });

test(<span class="hljs-string">'Login with different users'</span>, <span class="hljs-keyword">async</span> ({ page }) =&gt; {
  <span class="hljs-keyword">const</span> credentials = [{ <span class="hljs-attr">user</span>: <span class="hljs-string">'standard_user'</span>, <span class="hljs-attr">pass</span>: <span class="hljs-string">'secret_sauce'</span> }];
  <span class="hljs-keyword">for</span> (<span class="hljs-keyword">const</span> cred <span class="hljs-keyword">of</span> credentials) {
    <span class="hljs-keyword">await</span> page.goto(<span class="hljs-string">'https://www.saucedemo.com/'</span>);
    <span class="hljs-keyword">await</span> page.fill(<span class="hljs-string">'#user-name'</span>, cred.user);
    <span class="hljs-keyword">await</span> page.fill(<span class="hljs-string">'#password'</span>, cred.pass);
    <span class="hljs-keyword">await</span> page.click(<span class="hljs-string">'#login-button'</span>);
    <span class="hljs-comment">// Assertions here</span>
  }
});
</code></pre>
<h3 id="heading-integrate-the-script-into-a-test-runner-framework"><strong>Integrate the Script into a Test Runner Framework</strong></h3>
<p><strong>Problem:</strong><br />Even well-written scripts are of limited use if they are not integrated into a <strong>test execution framework</strong>. Raw scripts lack reporting, setup/teardown hooks, and test grouping.</p>
<p><strong>Solution:</strong><br />Integrate the script into a proper test runner:</p>
<ul>
<li><p>Use <strong>JUnit or TestNG</strong> for Java-based Selenium tests.</p>
</li>
<li><p>Use <strong>Playwright Test</strong> for JavaScript/TypeScript tests.</p>
</li>
<li><p>Add configuration files (<code>testng.xml</code>, <code>playwright.config.ts</code>) for parallel execution, environment setup, and reporter integration.</p>
</li>
</ul>
<p><strong>Bonus Enhancements:</strong></p>
<ul>
<li><p>Add <strong>HTML or Allure reports</strong> for test result visualization.</p>
</li>
<li><p>Use <strong>CI tools</strong> like GitHub Actions, Jenkins, or Azure DevOps to automate test execution after every commit or build.</p>
</li>
</ul>
<p><strong>Why it matters:</strong><br />Framework integration allows your scripts to become part of a sustainable and automated quality strategy—enabling regression testing, alerting, and test trend analysis.</p>
<h2 id="heading-real-world-application-from-demo-to-production">Real-World Application: From Demo to Production</h2>
<p>Consider a scenario where a QA engineer needs to automate the end-to-end checkout flow of an e-commerce application. Traditionally, this would involve inspecting each page, writing locators manually, and scripting the sequence of actions. Using this integrated approach, the engineer simply records the flow in Chrome—navigating through login, product selection, cart review, and checkout—and hands off the JSON to ChatGPT for conversion.</p>
<p>The result is a complete automation script that mimics the real user experience with precision, generated in minutes rather than hours.</p>
<p>This workflow is particularly effective in agile environments where requirements change frequently and test coverage must evolve rapidly. By capturing real user interactions and turning them into code, QA teams can adapt faster, reduce manual effort, and improve test reliability.</p>
<hr />
<h3 id="heading-automating-the-e-commerce-checkout-flow">Automating the E-commerce Checkout Flow</h3>
<p>The QA team is responsible for validating the <strong>checkout process</strong> of an e-commerce web application. The test case involves:</p>
<ol>
<li><p>Logging into the platform.</p>
</li>
<li><p>Selecting a product from the catalog.</p>
</li>
<li><p>Adding it to the shopping cart.</p>
</li>
<li><p>Proceeding to checkout.</p>
</li>
<li><p>Entering payment and shipping information.</p>
</li>
<li><p>Confirming the order and validating the success message.</p>
</li>
</ol>
<p>Traditionally, building this flow manually would require:</p>
<ul>
<li><p>Inspecting dozens of HTML elements.</p>
</li>
<li><p>Writing complex selectors.</p>
</li>
<li><p>Managing asynchronous behavior with waits.</p>
</li>
<li><p>Implementing assertions and reusable methods.</p>
</li>
<li><p>Testing and debugging each step iteratively.</p>
</li>
</ul>
<p>Using the <strong>Recorder + AI workflow</strong>, the process becomes streamlined and user-driven.</p>
<h3 id="heading-step-1-record-the-flow-with-chrome-devtools-recorder">Step 1: Record the Flow with Chrome DevTools Recorder</h3>
<p>Chrome DevTools Recorder allows QA engineers to record real interactions with the web application—capturing user behavior as a sequence of DOM events and metadata.</p>
<h4 id="heading-instructions">Instructions:</h4>
<ol>
<li><p><strong>Open Chrome</strong> and navigate to your e-commerce test environment.</p>
</li>
<li><p>Open <strong>DevTools</strong> (Right-click → Inspect or press <code>F12</code>) and switch to the <strong>Recorder</strong> tab.</p>
</li>
<li><p>Click <strong>"Start new recording"</strong> and name it (e.g., <code>CheckoutFlow</code>).</p>
</li>
<li><p>Perform the following actions in the browser:</p>
<ul>
<li><p>Enter login credentials and click <strong>Login</strong>.</p>
</li>
<li><p>Browse products and click <strong>Add to Cart</strong> on one.</p>
</li>
<li><p>Go to the cart and click <strong>Checkout</strong>.</p>
</li>
<li><p>Enter dummy address and payment details.</p>
</li>
<li><p>Click <strong>Place Order</strong> and wait for the confirmation screen.</p>
</li>
</ul>
</li>
<li><p>Once finished, return to the Recorder and click <strong>"Stop"</strong>.</p>
</li>
<li><p>Click the <strong>three-dot menu (⋮)</strong> next to your recording and choose <strong>Export as JSON</strong>.</p>
</li>
<li><p>Save the <code>.json</code> file—this file contains all recorded steps, selectors, and timings.</p>
</li>
</ol>
<p><strong>Result:</strong> You now have a structured, machine-readable file representing the exact user flow.</p>
<h3 id="heading-step-2-convert-the-recording-into-automation-code-using-generative-ai">Step 2: Convert the Recording into Automation Code Using Generative AI</h3>
<p>With the JSON export ready, the next step is to transform the recorded flow into executable test automation code. This is where <strong>Generative AI</strong> like ChatGPT comes into play.</p>
<h4 id="heading-instructions-1">Instructions:</h4>
<ol>
<li><p>Open ChatGPT (or another LLM-powered assistant).</p>
</li>
<li><p>Upload the exported JSON recording.</p>
</li>
<li><p>Provide a clear instruction prompt such as:</p>
</li>
</ol>
<blockquote>
<p>“Convert this Chrome DevTools Recorder JSON into a Playwright/Selenium test script in JavaScript/Java. Include proper waits and basic assertions.”</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747135239716/d46e5a4f-b592-4055-b466-8385d51aa8eb.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747135285794/17b1fcb6-7501-42a6-97c4-f595114f28be.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1747135305581/440b1cd3-b0ca-4954-847d-089077f3590d.png" alt class="image--center mx-auto" /></p>
</blockquote>
<ol start="4">
<li><p>The AI will parse the JSON and generate code that reproduces the entire user flow, using selectors and timing information from the recording.</p>
</li>
<li><p>Review the output. Optionally, ask ChatGPT to:</p>
<ul>
<li><p>Replace static waits with <code>waitForSelector</code> or <code>WebDriverWait</code>.</p>
</li>
<li><p>Add assertions to verify cart contents or order confirmation.</p>
</li>
<li><p>Wrap steps into reusable methods or page objects.</p>
</li>
</ul>
</li>
</ol>
<p><strong>Result:</strong> A ready-to-run script that replicates the entire checkout process, complete with selectors and actions pulled directly from real browser usage.</p>
<h3 id="heading-step-3-validate-and-refine-the-script">Step 3: Validate and Refine the Script</h3>
<p>Although the AI-generated script is functional, a senior QA engineer should still <strong>review and refine</strong> it to align with the team’s test architecture. This includes:</p>
<ul>
<li><p><strong>Verifying selectors</strong> and modifying them if they are too brittle (e.g., based on text content or dynamic class names).</p>
</li>
<li><p><strong>Adding assertions</strong> to validate critical checkpoints, such as:</p>
<ul>
<li><p>Ensuring the user lands on the homepage after login.</p>
</li>
<li><p>Checking that the cart count updates correctly.</p>
</li>
<li><p>Confirming that a success message appears after checkout.</p>
</li>
</ul>
</li>
<li><p><strong>Parameterizing</strong> login credentials and product selections to support data-driven tests.</p>
</li>
<li><p><strong>Encapsulating</strong> steps into reusable classes using the Page Object Model (POM).</p>
</li>
<li><p><strong>Integrating</strong> the script into a test runner like <strong>JUnit</strong>, <strong>TestNG</strong>, or <strong>Playwright Test</strong>.</p>
</li>
<li><p><strong>Running in CI/CD</strong> environments using GitHub Actions, Jenkins, or CircleCI.</p>
</li>
</ul>
<p><strong>Result:</strong> A clean, modular, and robust automation script that is CI-ready and production-grade.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Integrating <strong>Chrome DevTools Recorder</strong> with <strong>ChatGPT</strong> represents a powerful paradigm shift in how test automation scripts are authored. It bridges the gap between manual testing intuition and automation efficiency, making test creation more accessible, faster, and grounded in real usage scenarios.</p>
<p>This approach not only saves time but also empowers teams to maintain higher quality standards across rapidly evolving applications. As AI continues to evolve, workflows like this will become essential tools in the modern QA toolkit—allowing us to focus more on strategy, architecture, and resilience, and less on the mechanics of boilerplate scripting.</p>
]]></content:encoded></item><item><title><![CDATA[AI-Driven Software Testing: Benefits, Challenges, and Future Trends]]></title><description><![CDATA[The integration of Artificial Intelligence (AI) into software testing is transforming quality assurance (QA) practices. By automating complex tasks, predicting potential issues, and adapting to evolving software environments, AI is enhancing the effi...]]></description><link>https://articles.eminmuhammadi.com/ai-driven-software-testing-benefits-challenges-and-future-trends</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/ai-driven-software-testing-benefits-challenges-and-future-trends</guid><category><![CDATA[AI]]></category><category><![CDATA[#ai-tools]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[QA engineering]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Fri, 18 Apr 2025 18:07:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1744999305219/585cc937-8a7c-4836-a786-b0f2849828d6.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The integration of Artificial Intelligence (AI) into software testing is transforming quality assurance (QA) practices. By automating complex tasks, predicting potential issues, and adapting to evolving software environments, AI is enhancing the efficiency and effectiveness of testing processes. This article delves into the benefits, challenges, and future trends of AI-driven software testing, providing insights into how AI is reshaping the QA landscape.</p>
<p>Artificial intelligence has transitioned from a theoretical concept to a practical tool within the realm of software testing. Its ability to analyze vast datasets, learn from patterns, and make informed decisions positions AI as an invaluable asset in QA processes. Unlike traditional automation, which relies on predefined scripts, AI-driven testing adapts to changes, predicts potential issues, and continuously improves through learning mechanisms. This dynamic approach addresses the limitations of manual testing and static automation, offering a more resilient and responsive testing framework.​</p>
<h3 id="heading-benefits-of-ai-in-software-testing"><strong>Benefits of AI in Software Testing</strong></h3>
<p>AI brings numerous advantages to software testing, significantly improving the speed, accuracy, and coverage of QA processes.​</p>
<p>Enhanced Test Coverage and Accuracy: AI testing tools can process large amounts of data and identify complex patterns that manual testing might miss. This comprehensive analysis ensures thorough testing of applications, reducing the risk of undetected issues. ​</p>
<p>Faster Test Execution: By automating repetitive tasks such as regression and functional testing, AI accelerates the testing process. Machine learning algorithms can predict which parts of the code are most likely to fail, allowing testers to prioritize critical areas. ​</p>
<p>Continuous Testing and Integration: AI facilitates continuous testing by integrating with Continuous Integration/Continuous Deployment (CI/CD) pipelines. This integration ensures that every code update is automatically tested in real-time, maintaining software stability throughout development. ​</p>
<p>Improved Defect Prediction and Prevention: By analyzing historical test data, AI can identify patterns that lead to failures, enabling proactive defect prevention. This predictive capability reduces the time and resources spent on fixing issues post-release. ​</p>
<p>Efficient Resource Utilization: AI optimizes resource usage by automating the creation, execution, and analysis of test cases. This automation allows testers to focus on more complex tasks, enhancing overall productivity. ​</p>
<h3 id="heading-challenges-of-implementing-ai-in-testing"><strong>Challenges of Implementing AI in Testing</strong></h3>
<p>Despite its advantages, integrating AI into software testing presents several challenges that organizations must address.​ High Initial Investment: Implementing AI-powered testing tools requires significant upfront investment in terms of time, money, and resources. This can be a barrier for smaller organizations. ​</p>
<p>Complexity in Setup and Maintenance: Setting up AI-based testing systems is complex and requires specialized skills. Maintaining these systems to adapt to new technologies and changing requirements can also be challenging. ​</p>
<p>Data Dependency: AI algorithms rely heavily on data to function effectively. Inaccurate, incomplete, or biased data can lead to incorrect results, compromising software quality. Lack of Standardization: The absence of standardized AI testing tools and frameworks can make it difficult for organizations to choose the best solutions, leading to inconsistent test results.</p>
<p>Ethical and Security Concerns: The use of AI in testing raises concerns about data privacy, security, and ethical considerations. Ensuring that AI-based testing adheres to ethical guidelines and protects sensitive information is crucial. ​</p>
<h3 id="heading-future-trends-in-ai-driven-software-testing"><strong>Future Trends in AI-Driven Software Testing</strong></h3>
<p>The future of software testing is poised to be significantly influenced by advancements in AI technologies.​</p>
<p>AI-Supported Test Case Creation: AI will increasingly generate test cases based on user behavior, making tests more accurate and relevant. ​Self-Healing Test Automation: AI testing tools will automatically adjust to changes in the software, reducing the need for manual updates and ensuring the continued effectiveness of test cases. ​</p>
<p>Increased Use of Natural Language Processing (NLP): NLP will enhance software testing by enabling systems to understand and process human language, simplifying test creation and improving communication between testers and AI tools. ​AI-Driven Security Testing: As cyber threats become more sophisticated, AI will play a larger role in security testing by identifying vulnerabilities and potential attack vectors more effectively. ​</p>
<p>Integration with DevOps and Agile Methodologies: AI will continue to integrate with DevOps and Agile practices, facilitating faster and more efficient software development cycles while maintaining high-quality standards. ​</p>
]]></content:encoded></item><item><title><![CDATA[Defect Life Cycle in Software Testing: Stages, Process and Best Practices]]></title><description><![CDATA[Software quality assurance is an essential aspect of the development process, ensuring that applications meet user expectations and function as intended. One of the core elements of software testing is defect management, which involves identifying, t...]]></description><link>https://articles.eminmuhammadi.com/defect-life-cycle-in-software-testing-stages-process-and-best-practices</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/defect-life-cycle-in-software-testing-stages-process-and-best-practices</guid><category><![CDATA[Bug Reporting]]></category><category><![CDATA[Defect life cycle]]></category><category><![CDATA[QA engineering]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[defect]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Tue, 04 Mar 2025 12:52:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1741092632782/b9e8cb5e-e1d0-4d78-bd62-ef26d764ca7a.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Software quality assurance is an essential aspect of the development process, ensuring that applications meet user expectations and function as intended. One of the core elements of software testing is defect management, which involves identifying, tracking, and resolving defects. The <strong>defect life cycle</strong>, also known as the <strong>bug life cycle</strong>, defines the systematic journey of a defect from discovery to closure. Understanding this life cycle helps software teams optimize debugging efforts, improve efficiency, and enhance software quality.</p>
<p>The defect life cycle represents the sequence of states a defect undergoes from its initial discovery to its final resolution. Each defect is tracked using a <strong>defect tracking system (DTS)</strong> or <strong>bug tracking tool</strong>, such as JIRA, Bugzilla, or Azure DevOps. Properly managing this cycle ensures that issues are addressed promptly and efficiently, reducing technical debt and improving the overall reliability of software applications.</p>
<h2 id="heading-stages-of-the-defect-life-cycle">Stages of the Defect Life Cycle</h2>
<ol>
<li><p><strong>New</strong>: A defect is first reported when a tester discovers an issue during the software testing process. It is logged in the defect tracking system with necessary details, including the test environment, steps to reproduce, expected vs. actual results, severity, priority, and supporting attachments like screenshots or logs.</p>
</li>
<li><p><strong>Assigned</strong>: Once the defect is reviewed and validated, it is assigned to a developer or a development team member responsible for investigating and fixing the issue. At this stage, the defect's priority and severity are assessed to determine how quickly it needs to be addressed.</p>
</li>
<li><p><strong>Open</strong>: The developer analyzes the defect and begins working on resolving it. This may involve debugging, reviewing code, and collaborating with other team members to understand the root cause. The defect remains in the "Open" state until a resolution is proposed.</p>
</li>
<li><p><strong>Rejected</strong>: A defect may be rejected if it is determined to be invalid, caused by incorrect test execution, or due to a misunderstanding of system requirements. Developers may also reject defects if they cannot reproduce the reported issue. If rejected, the defect status is updated with an explanation.</p>
</li>
<li><p><strong>Duplicate</strong>: If the reported defect already exists in the tracking system, it is marked as a duplicate. Duplicate defects are linked to the original defect entry, preventing redundancy and ensuring that all related defects are addressed collectively.</p>
</li>
<li><p><strong>Deferred</strong>: Sometimes, a defect may not be immediately addressed due to its low priority, limited impact, or dependency on other features. In such cases, it is deferred for a future release and revisited later.</p>
</li>
<li><p><strong>Fixed</strong>: After analyzing and debugging the defect, the developer applies a fix and updates the status to "Fixed." The modified code is then pushed to a test environment for further validation.</p>
</li>
<li><p><strong>Retesting</strong>: The testing team verifies the fix by re-executing the test cases. The goal is to ensure that the defect has been successfully resolved and that the functionality now works as expected. If the defect persists, the tester reopens it.</p>
</li>
<li><p><strong>Reopened</strong>: If the defect is not resolved correctly, it is reopened and reassigned to the developer for further analysis and fixing. This process continues until the issue is completely resolved.</p>
</li>
<li><p><strong>Verified</strong>: If the defect is successfully fixed and passes all required tests, the tester marks it as "Verified." This indicates that the issue is no longer present and the fix is effective.</p>
</li>
<li><p><strong>Closed</strong>: Once the defect is verified and no longer requires further action, it is marked as "Closed." This signifies that the issue has been resolved, tested, and approved for deployment.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1741363260613/66d906ae-6a09-4dba-8c6c-0b175b6eee4b.jpeg" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-impact-of-agile-devops-and-ai-on-the-defect-life-cycle">The Impact of Agile, DevOps, and AI on the Defect Life Cycle</h2>
<p>Modern software development practices, such as Agile and DevOps, have influenced the defect life cycle by introducing faster feedback loops and continuous integration processes. In Agile environments, defects are often addressed within the same sprint, reducing the risk of long-standing issues. Automation tools further streamline defect management by detecting issues early and facilitating quicker resolutions.</p>
<h3 id="heading-agile-and-devops-integration">Agile and DevOps Integration</h3>
<p>Agile and DevOps methodologies have significantly changed the defect life cycle by promoting continuous integration and faster feedback loops. In Agile development, defects are often addressed within the same sprint, minimizing delays. DevOps practices, including automated testing and continuous deployment, enable teams to detect and fix defects in real time.</p>
<h3 id="heading-ai-driven-defect-prediction">AI-Driven Defect Prediction</h3>
<p>Artificial Intelligence (AI) and Machine Learning (ML) are revolutionizing defect management by offering predictive analytics. AI-based defect tracking tools analyze historical data to anticipate defects and suggest preventive measures. This proactive approach enhances software quality and reduces defect recurrence.</p>
<h3 id="heading-shift-left-testing">Shift-Left Testing</h3>
<p>Shift-left testing is a modern practice where testing is conducted earlier in the software development life cycle. By catching defects at an early stage, organizations can significantly reduce the cost and effort required to fix issues in later phases. This approach emphasizes automated unit testing, static code analysis, and early integration testing.</p>
<h3 id="heading-automated-defect-tracking-and-reporting">Automated Defect Tracking and Reporting</h3>
<p>Advanced defect tracking tools now feature automation capabilities that streamline reporting, prioritization, and resolution. Integration with CI/CD pipelines ensures that defects are automatically logged and assigned to developers, improving efficiency and reducing manual effort.</p>
<h3 id="heading-enhanced-collaboration-with-cloud-based-tools">Enhanced Collaboration with Cloud-Based Tools</h3>
<p>Cloud-based defect tracking systems enable distributed teams to collaborate effectively. Tools like JIRA, Azure DevOps, and TestRail facilitate real-time defect tracking, reporting, and resolution management across multiple teams and geographical locations.</p>
<h2 id="heading-best-practices-for-managing-the-defect-life-cycle">Best Practices for Managing the Defect Life Cycle</h2>
<ul>
<li><p><strong>Use a Standardized Defect Template</strong>: Clearly define and document defects with all necessary details, including severity, priority, test environment, and reproduction steps.</p>
</li>
<li><p><strong>Prioritize Defects Effectively</strong>: Categorize defects based on their business impact and resolve critical defects first to prevent major disruptions.</p>
</li>
<li><p><strong>Encourage Cross-Team Collaboration</strong>: Foster communication between developers, testers, and business analysts to ensure defects are properly understood and resolved efficiently.</p>
</li>
<li><p><strong>Leverage Test Automation</strong>: Implement automated testing to identify defects early and reduce manual testing effort.</p>
</li>
<li><p><strong>Monitor Defect Trends</strong>: Use defect analytics to track trends, identify recurring issues, and improve software development practices.</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Defect management is a crucial aspect of software testing that ensures the delivery of high-quality applications. Understanding the <strong>defect life cycle</strong> and adopting modern testing trends can significantly enhance defect resolution processes. By integrating Agile, DevOps, AI-driven analytics, and automated defect tracking, organizations can optimize software quality and reduce time-to-market. Efficient defect management ultimately leads to a seamless user experience and a more robust software product.</p>
]]></content:encoded></item><item><title><![CDATA[Kyber KEM: A Quantum-Resistant Lattice-Based Framework for Secure Key Encapsulation (Example in Golang)]]></title><description><![CDATA[Cryptography, the science of secure communication, has evolved to address new challenges in a world where traditional encryption techniques may fall short against quantum computers. Quantum computers, capable of processing vast amounts of information...]]></description><link>https://articles.eminmuhammadi.com/kyber-kem-a-quantum-resistant-lattice-based-framework-for-secure-key-encapsulation-example-in-golang</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/kyber-kem-a-quantum-resistant-lattice-based-framework-for-secure-key-encapsulation-example-in-golang</guid><category><![CDATA[kyber]]></category><category><![CDATA[pq-crystals]]></category><category><![CDATA[Lattice-Based Cryptography]]></category><category><![CDATA[elliptic curve cryptography]]></category><category><![CDATA[RSA]]></category><category><![CDATA[quantum computing]]></category><category><![CDATA[Quantum Cryptography]]></category><category><![CDATA[NIST]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Mon, 28 Oct 2024 19:18:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730142976280/35821ceb-c2d8-4490-86cb-0e6a85135a48.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Cryptography, the science of secure communication, has evolved to address new challenges in a world where traditional encryption techniques may fall short against quantum computers. Quantum computers, capable of processing vast amounts of information simultaneously, have the potential to break widely-used encryption algorithms, including <a target="_blank" href="https://dl.acm.org/doi/pdf/10.1145/359340.359342">RSA</a> and <a target="_blank" href="https://www.ams.org/journals/mcom/1987-48-177/S0025-5718-1987-0866109-5/S0025-5718-1987-0866109-5.pdf">ECC</a>, which secure much of today's digital communications. One promising approach to counter these threats is lattice-based cryptography, a field that relies on mathematical structures resistant to quantum attacks. Among the innovative advancements in this area is Kyber, a lattice-based key encapsulation mechanism (KEM) designed to maintain secure communication, even in the post-quantum era.</p>
<h3 id="heading-lattice-based-cryptography-an-overview"><strong>Lattice-Based Cryptography: An Overview</strong></h3>
<p>At the core of lattice-based cryptography is the concept of mathematical lattices, which can be visualized as a grid of points in multidimensional space. This type of cryptography draws its security from complex mathematical problems that are difficult to solve, even for quantum computers. Two of the primary problems supporting this security are the <strong>Learning With Errors (LWE)</strong> and <strong>Ring-Learning With Errors (RLWE)</strong> problems. The LWE problem involves finding a solution when a bit of noise or randomness is added to an otherwise solvable system, making it computationally challenging. RLWE builds on LWE by placing this problem in a structured ring setting, which improves efficiency while maintaining security. The difficulty of solving these lattice problems provides the basis for secure, quantum-resistant encryption in lattice-based systems.</p>
<p>In practical terms, lattice-based cryptographic systems encode messages as specific points within a lattice. When a message is encrypted, a small amount of noise is added, effectively hiding the original message within the lattice's structure. To retrieve the message, the recipient must have precise knowledge of the lattice structure used in the encoding. The approach provides both efficiency and strong security guarantees because attackers, even those with quantum computing capabilities, struggle to locate and interpret the hidden message within the noisy lattice.</p>
<h3 id="heading-key-encapsulation-mechanisms-kem"><strong>Key Encapsulation Mechanisms (KEM)</strong></h3>
<p>A Key Encapsulation Mechanism (KEM) is a protocol used in cryptography to securely transmit encryption keys. Rather than directly sharing encryption keys, which could be intercepted, KEM encapsulates a randomly generated encryption key within a secure cryptographic process. This encapsulated key is then used to encrypt and decrypt communications. In essence, KEM enables a secure key exchange between parties without risking exposure of the encryption key during transmission.</p>
<p><a target="_blank" href="https://pq-crystals.org/kyber/data/kyber-specification-round3-20210131.pdf">Kyber</a>, a lattice-based KEM, was designed with the unique challenges of the post-quantum era in mind. It uses the <strong>module lattice LWE (MLWE)</strong> problem, a variant of the standard LWE problem. This modification makes Kyber both more flexible and scalable, allowing it to secure communication channels at varying levels of security based on the application’s needs. The MLWE-based structure of Kyber enables it to encapsulate keys in such a way that security can be tailored while maintaining strong post-quantum resistance.</p>
<p>Kyber’s structure revolves around three essential processes—key generation, encapsulation, and decapsulation. Each phase contributes to Kyber’s ability to securely exchange encryption keys.</p>
<p>In the <strong>Key Generation</strong> phase, both a public key and a secret key are generated. The public key, which will be used for encryption, contains a matrix created from a seed, ensuring both randomness and consistency across keys. This matrix is central to the security of the system, as it is the reference point for decoding the lattice structure in later steps. Meanwhile, the secret key, stored securely by the recipient, is designed to facilitate decryption and is never exposed to other parties.</p>
<p>During <strong>Encapsulation</strong>, a random message or key is encrypted using the recipient's public key. This process produces a ciphertext, or encrypted message, which can be safely transmitted over public channels without risking the confidentiality of the original message. The encapsulated key, securely hidden within the ciphertext, is then used to perform symmetric encryption for efficient and secure communication.</p>
<p>In the <strong>Decapsulation</strong> step, the recipient uses their secret key to decrypt the ciphertext and retrieve the original encapsulated key. This key, once recovered, forms the foundation of a secure communication channel, as it allows both parties to decrypt messages exchanged using the same symmetric encryption key. The encapsulation and decapsulation processes make Kyber a highly secure KEM, even in cases where an attacker intercepts the ciphertext, as decrypting the message without the secret key remains virtually impossible.</p>
<h3 id="heading-an-example-of-kyber-in-action"><strong>An Example of Kyber in Action</strong></h3>
<p>Imagine a scenario where Alice and Bob, two parties who want to communicate securely, decide to use Kyber KEM. Bob begins by generating a public and secret key pair through Kyber’s key generation process. He then shares his public key with Alice, while securely storing the secret key for later use.</p>
<p>Alice, who wishes to send a confidential message to Bob, generates a random encryption key on her end. She then uses Bob's public key to encapsulate this random encryption key, producing ciphertext in the process. This ciphertext essentially hides the random encryption key within a secure cryptographic structure and is sent to Bob over a potentially insecure channel.</p>
<p>When Bob receives Alice's ciphertext, he uses his secret key to decapsulate it, retrieving the random encryption key that Alice initially generated. This shared key now allows Bob to decrypt any subsequent messages from Alice securely, ensuring both confidentiality and authenticity in their communication.</p>
<p>Let's consider a scenario in which Alice and Bob use Kyber for key encapsulation to securely exchange a 256-bit AES-GCM key. Here’s the full Go code for this configuration:</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"crypto/aes"</span>
    <span class="hljs-string">"crypto/cipher"</span>
    <span class="hljs-string">"crypto/rand"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"log"</span>

    <span class="hljs-string">"github.com/cloudflare/circl/kem/schemes"</span>
)

<span class="hljs-keyword">var</span> kyber = schemes.ByName(<span class="hljs-string">"Kyber512"</span>) <span class="hljs-comment">// Using Kyber-512 KEM</span>

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Step 1: Bob generates a Kyber-512 public/private key pair</span>
    bobPubK, bobPrivK, err := kyber.GenerateKeyPair()
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error generating key pair: %v"</span>, err)
    }
    fmt.Println(<span class="hljs-string">"Bob has generated his public and private keys."</span>)

    <span class="hljs-comment">// Step 2: Alice encapsulates a shared secret (AES-256 key) using Bob's public key</span>
    kemCiphertext, sharedSecretEncap, err := kyber.Encapsulate(bobPubK)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error encapsulating the shared secret: %v"</span>, err)
    }
    fmt.Println(<span class="hljs-string">"Alice has encapsulated an AES-256-GCM key using Bob's public key."</span>)

    <span class="hljs-comment">// Step 3: Alice uses the encapsulated shared secret as the AES key for GCM encryption</span>
    block, err := aes.NewCipher(sharedSecretEncap[:<span class="hljs-number">32</span>]) <span class="hljs-comment">// Using first 32 bytes for AES-256</span>
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error creating AES cipher: %v"</span>, err)
    }

    aesGCM, err := cipher.NewGCM(block)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error creating AES-GCM mode: %v"</span>, err)
    }

    <span class="hljs-comment">// Generate a nonce for AES-GCM encryption</span>
    nonce := <span class="hljs-built_in">make</span>([]<span class="hljs-keyword">byte</span>, aesGCM.NonceSize())
    <span class="hljs-keyword">if</span> _, err := rand.Read(nonce); err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error generating nonce: %v"</span>, err)
    }

    <span class="hljs-comment">// Step 4: Alice encrypts her confidential message with AES-GCM</span>
    message := []<span class="hljs-keyword">byte</span>(<span class="hljs-string">"This is a confidential message from Alice to Bob."</span>)
    aesCiphertext := aesGCM.Seal(<span class="hljs-literal">nil</span>, nonce, message, <span class="hljs-literal">nil</span>)
    fmt.Printf(<span class="hljs-string">"Alice has encrypted her message with the shared secret.\n"</span>)

    <span class="hljs-comment">// Alice sends Bob the ciphertext (Kyber encapsulated key), nonce, and AES-GCM-encrypted message</span>
    fmt.Printf(<span class="hljs-string">"Ciphertext (Encapsulated Key): %x\n"</span>, kemCiphertext)
    fmt.Printf(<span class="hljs-string">"Nonce: %x\n"</span>, nonce)
    fmt.Printf(<span class="hljs-string">"Encrypted Message: %x\n"</span>, aesCiphertext)

    <span class="hljs-comment">// Step 5: Bob receives Alice's ciphertext and decrypts it using his Kyber private key</span>
    sharedSecretDecap, err := kyber.Decapsulate(bobPrivK, kemCiphertext)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error decapsulating the shared secret: %v"</span>, err)
    }

    <span class="hljs-comment">// Step 6: Bob uses the decapsulated shared secret to decrypt the AES-GCM-encrypted message</span>
    block, err = aes.NewCipher(sharedSecretDecap[:<span class="hljs-number">32</span>]) <span class="hljs-comment">// Using first 32 bytes for AES-256</span>
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error creating AES cipher for decryption: %v"</span>, err)
    }

    aesGCM, err = cipher.NewGCM(block)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error creating AES-GCM mode for decryption: %v"</span>, err)
    }

    <span class="hljs-comment">// Bob decrypts the message</span>
    plaintext, err := aesGCM.Open(<span class="hljs-literal">nil</span>, nonce, aesCiphertext, <span class="hljs-literal">nil</span>)
    <span class="hljs-keyword">if</span> err != <span class="hljs-literal">nil</span> {
        log.Fatalf(<span class="hljs-string">"Error decrypting message: %v"</span>, err)
    }

    fmt.Printf(<span class="hljs-string">"Bob has decrypted the message: %s\n"</span>, plaintext)

    <span class="hljs-comment">// Verification</span>
    <span class="hljs-keyword">if</span> <span class="hljs-keyword">string</span>(message) == <span class="hljs-keyword">string</span>(plaintext) {
        fmt.Println(<span class="hljs-string">"Message successfully encrypted and decrypted using Kyber-encapsulated AES-256-GCM key!"</span>)
    } <span class="hljs-keyword">else</span> {
        fmt.Println(<span class="hljs-string">"Error: Decrypted message does not match original."</span>)
    }
}
</code></pre>
<p>Explanation:</p>
<ol>
<li><p><strong>Key Generation</strong>: Bob generates a Kyber-512 key pair, which includes a public and private key.</p>
</li>
<li><p><strong>Encapsulation</strong>: Alice uses Bob’s public key to encapsulate a shared secret, which will act as her AES-256 key. She obtains a <code>kemCiphertext</code> (Kyber encapsulated key) and a <code>sharedSecretEncap</code>.</p>
</li>
<li><p><strong>AES-GCM Encryption</strong>: Alice encrypts her message using AES-GCM, with the first 32 bytes of <code>sharedSecretEncap</code> as the AES key.</p>
</li>
<li><p><strong>Transmission</strong>: Alice sends the <code>kemCiphertext</code>, the AES-GCM <code>nonce</code>, and the AES-encrypted message <code>aesCiphertext</code> to Bob.</p>
</li>
<li><p><strong>Decapsulation and Decryption</strong>: Bob decapsulates the <code>kemCiphertext</code> to retrieve <code>sharedSecretDecap</code>, which should match Alice’s <code>sharedSecretEncap</code>. He then uses the first 32 bytes of this shared secret as his AES-256 key to decrypt the message <code>aesCiphertext</code>.</p>
</li>
</ol>
<p>This setup demonstrates a quantum-resistant key encapsulation (Kyber-512) combined with AES-GCM encryption, suitable for secure post-quantum communication.</p>
<h3 id="heading-advantages-of-using-kyber"><strong>Advantages of Using Kyber</strong></h3>
<p>Kyber presents several significant advantages, which make it suitable for addressing the modern encryption challenges presented by quantum computing.</p>
<p>Kyber’s quantum resistance is grounded in the MLWE problem, a mathematically challenging structure that even quantum computers cannot easily break. This sets Kyber apart from traditional cryptographic systems, which rely on integer factorization or discrete logarithms, both vulnerable to quantum attacks.</p>
<p>Another key advantage is Kyber’s efficiency. The use of module lattices allows Kyber to offer robust security without imposing heavy computational demands, making it viable for both high-security and resource-constrained environments. In comparison to other post-quantum cryptosystems, Kyber requires relatively low storage and computational resources, which helps optimize its performance.</p>
<p>Kyber also offers scalability. It is configurable to provide varying levels of security, tailored to different applications and risk profiles. For example, Kyber provides three different security levels (Kyber512, Kyber768, and Kyber1024) each offering incremental security guarantees. This flexibility enables Kyber to cater to a broad range of applications, from secure online transactions to high-stakes government communications.</p>
<h3 id="heading-practical-applications-of-kyber"><strong>Practical Applications of Kyber</strong></h3>
<p>Kyber KEM’s design and capabilities make it particularly suitable for applications where secure key exchange is critical. In online communications, Kyber KEM could enhance the security of HTTPS protocols, providing post-quantum-safe encryption for browsing sessions and online transactions. For secure messaging applications, Kyber’s use for key exchange ensures that conversations are protected against potential quantum attacks, preserving user privacy in a future where quantum computers are more accessible.</p>
<p>In the realm of the Internet of Things (IoT), Kyber’s lightweight nature makes it highly effective. IoT devices often have limited processing power and memory, but with Kyber’s efficiency and low computational requirements, these devices can implement secure encryption to protect data transmission without sacrificing performance or battery life.</p>
<h3 id="heading-conclusion"><strong>Conclusion</strong></h3>
<p>Kyber represents a significant advancement in cryptography, reflecting the growing need for quantum-resistant encryption as quantum technology progresses. Its foundation in lattice-based cryptography offers robustness and resilience against quantum attacks, ensuring that even advanced computing capabilities cannot easily breach encrypted communications. By adapting the key encapsulation mechanism to fit diverse security needs and resource constraints, Kyber stands out as a promising solution for future-proofing digital security across industries and applications.</p>
<hr />
<p>References:</p>
<p>Rivest, R.L., Shamir, A. and Adleman, L., 1978. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2), pp.120-126.</p>
<p>Koblitz, N. (1987). Elliptic curve cryptosystems. Mathematics of Computation, 48(177), 203-209.</p>
<p>Avanzi, R., Bos, J., Ducas, L., Kiltz, E., Lepoint, T., Lyubashevsky, V., Schanck, J.M., Schwabe, P., Seiler, G. and Stehlé, D., 2019. CRYSTALS-Kyber algorithm specifications and supporting documentation. NIST PQC Round, 2(4), pp.1-43.</p>
]]></content:encoded></item><item><title><![CDATA[Create Your Free Self-Hosted Telegram AI Chatbot with n8n and Ollama]]></title><description><![CDATA[In an increasingly automated world, the synergy of artificial intelligence (AI) and automation tools has unlocked new possibilities for businesses and developers. Ollama's LLaMA 3.1 model and n8n workflows are two such tools that, when combined, offe...]]></description><link>https://articles.eminmuhammadi.com/create-your-free-self-hosted-telegram-ai-chatbot-with-n8n-and-ollama</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/create-your-free-self-hosted-telegram-ai-chatbot-with-n8n-and-ollama</guid><category><![CDATA[AI]]></category><category><![CDATA[telegram bot]]></category><category><![CDATA[telegram]]></category><category><![CDATA[ollama]]></category><category><![CDATA[LLaMa]]></category><category><![CDATA[n8n]]></category><category><![CDATA[Workflow Automation]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Wed, 21 Aug 2024 20:03:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724270977476/0dc5dc1d-ca71-42c9-a5de-a412bbc4f851.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In an increasingly automated world, the synergy of artificial intelligence (AI) and automation tools has unlocked new possibilities for businesses and developers. <a target="_blank" href="https://ollama.com/library/llama3.1">Ollama's LLaMA 3.1</a> model and n8n workflows are two such tools that, when combined, offer a powerful and flexible solution for creating AI-driven chat workflows. This guide will walk you through the process of integrating the <a target="_blank" href="https://llama.meta.com/">LLaMA 3.1</a> model with <a target="_blank" href="https://docs.n8n.io/hosting/">n8n</a> to create dynamic AI chatbots and workflows that can automate tasks, respond to user queries, and provide valuable insights.</p>
<p>An example of a chat system that is fully self-hosted and freely accessible, without the need for an API key.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724268511099/8e381148-6e9b-462b-832a-9d02ef22c6b5.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-overview-of-the-llama-31-model"><strong>Overview of the LLaMA 3.1 Model</strong></h3>
<p>Ollama’s LLaMA (Language Learning and Model Architecture) 3.1 is a state-of-the-art language model designed to facilitate natural language understanding and generation tasks. Developed by Ollama, this model leverages deep learning techniques to offer advanced AI capabilities that are particularly well-suited for creating conversational agents, automated text generation, and other language-centric applications.</p>
<p>The LLaMA 3.1 model is known for its ability to understand context, generate coherent and contextually appropriate responses, and handle a wide range of conversational topics. This makes it an ideal choice for developers looking to create intelligent chatbots and other AI-driven communication tools.</p>
<p><strong>Key Features and Capabilities</strong></p>
<ul>
<li><p><strong>Contextual Understanding:</strong> The LLaMA 3.1 model excels at understanding the nuances of human language, allowing it to generate more accurate and relevant responses.</p>
</li>
<li><p><strong>Scalability:</strong> Whether you’re developing a simple chatbot or a complex conversational system, LLaMA 3.1 can scale to meet the demands of your application.</p>
</li>
<li><p><strong>Customizability:</strong> Developers can fine-tune the model for specific use cases, ensuring that it aligns with the desired tone, style, and subject matter.</p>
</li>
<li><p><strong>Efficiency:</strong> Despite its advanced capabilities, LLaMA 3.1 is optimized for efficiency, making it suitable for real-time applications.</p>
</li>
</ul>
<h3 id="heading-n8n-workflows">n8n Workflows</h3>
<p>n8n is an open-source workflow automation tool that allows users to design, execute, and manage complex workflows using a visual interface. It supports a wide range of integrations and can automate processes that involve multiple steps, making it a powerful tool for developers and businesses looking to streamline their operations.</p>
<p>Using n8n for workflow automation offers several key advantages. Its modular design provides flexibility, allowing users to create custom workflows that can be tailored to specific needs by integrating various services and APIs. The visual workflow builder enhances ease of use, making it accessible for users, even those with limited coding experience, to design and implement automation.</p>
<p>Additionally, n8n's open-source nature allows for extensibility, enabling the addition of new nodes and integrations to create highly customized solutions. The active community surrounding n8n further supports its development, offering resources, plugins, and ongoing assistance to users.</p>
<h3 id="heading-creating-a-simple-ai-chat-workflow"><strong>Creating a Simple AI Chat Workflow</strong></h3>
<p>When creating your AI chat workflow, keep in mind the following essential components:</p>
<p>Gather user input through a web form, chatbot interface, or API call. Next, send this input to the LLaMA 3.1 model for AI processing to generate a response. Finally, deliver the AI-generated response back to the user, either within the same interface or through another communication channel.</p>
<p>To configure triggers in n8n, set them to initiate workflows based on specific events. For example, a webhook trigger can be established to start the workflow when user input is received, while time-based triggers, like cron nodes, can be used for scheduled tasks such as sending reminders or follow-up messages.</p>
<p>When integrating the LLaMA 3.1 model for AI responses, use a user input node to capture the input via a webhook or API call node. Then, send this input to the LLaMA 3.1 API using an HTTP request node, ensuring the request includes the necessary parameters for generating a response. Lastly, handle the API’s response by processing the relevant text and formatting it for delivery.</p>
<h3 id="heading-a-detailed-guide-for-building-a-telegram-ai-bot">A detailed guide for building a Telegram AI bot</h3>
<p>Create a Telegram Trigger node to handle all incoming messages. To configure the API connection for Telegram, it is necessary to obtain an "Access Token." Comprehensive documentation can be accessed <a target="_blank" href="https://core.telegram.org/bots/tutorial">here</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724266023307/bc5f738a-6bab-4a3b-830c-3ca1ae1c1490.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724265910314/da02869d-0780-4bd1-b06b-392818d2d7ad.png" alt class="image--center mx-auto" /></p>
<p>Next, add a Question and Answer Chain node to forward incoming messages to the LLaMA 3.1 model for generating AI responses to the provided queries.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724266617134/ba5af28a-e7a6-4186-b0f4-767ab5eb8bdc.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724266520705/c1134eca-615b-4c38-83b8-bf07a4937437.png" alt class="image--center mx-auto" /></p>
<p>I prefer to include an additional step to neutralize the AI response. To implement this, use sentiment analysis and additional AI agent nodes to process the response before delivering the final message.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724267998358/daf0da8e-ba2b-43f9-9efc-8bbe7dbfe194.png" alt class="image--center mx-auto" /></p>
<p>Before deploying this workflow in a production environment, it is necessary to normalize user queries. I recommend removing slashes ("/") from the messages. Add an "If" node and ensure that the workflow continues from the false path.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724268119008/d205ed6b-b03f-42de-ac7f-0d230a2c03ae.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724268277631/87167b0c-5383-40a7-8ffc-41a7cea54dae.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-testing">Testing</h3>
<p>In developing AI chat workflows, you may encounter several common issues. One such issue is API rate limits, which necessitate that your workflow handle these limits effectively, potentially through implementing retry or backoff strategies. Another common problem is timeouts; it's important to configure timeout settings appropriately, especially for long-running API calls, to avoid workflow failures.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724268410835/623240ee-c20a-4747-b585-625b32ed4f23.png" alt class="image--center mx-auto" /></p>
<p>n8n offers various tools for debugging workflows. You can utilize n8n’s execution history to review previous workflow runs and identify where errors have occurred. Additionally, running workflows in debug mode allows you to access detailed logs and inspect data at each step.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>As automation and artificial intelligence (AI) continue to advance, the integration of these technologies offers transformative possibilities for businesses and developers. By combining Ollama's LLaMA 3.1 model with n8n workflows, you can harness a powerful and flexible solution to develop sophisticated AI-driven chatbots and workflows. This guide has outlined the process of integrating LLaMA 3.1 with n8n to create dynamic, responsive chat systems that automate tasks, address user queries, and provide valuable insights.</p>
<p>The LLaMA 3.1 model stands out for its advanced language capabilities, including contextual understanding, scalability, and customizability, making it a prime choice for developers seeking to create intelligent conversational agents. Coupled with n8n's robust workflow automation feature - such as its modular design, visual interface, and extensive integration options - this combination allows for the efficient and effective development of AI chat solutions.</p>
<p>In building a Telegram AI bot, this guide has covered the essential steps, from configuring triggers and handling user input to integrating the LLaMA 3.1 model and normalizing user queries. Additionally, addressing common issues such as API rate limits and timeouts, and utilizing n8n’s debugging tools, ensures a smooth and reliable deployment process.</p>
<p>By following these guidelines, you can leverage the capabilities of LLaMA 3.1 and n8n to build and deploy a robust, self-hosted AI chatbot that enhances user interaction and streamlines communication processes.</p>
]]></content:encoded></item><item><title><![CDATA[TestOps: Connecting Testing and Operations Efficiently]]></title><description><![CDATA[In software development, TestOps has become important for connecting testing and operations. As development speeds up and the need for continuous delivery grows, it's essential to include testing in the operational side of software delivery. This art...]]></description><link>https://articles.eminmuhammadi.com/testops-connecting-testing-and-operations-efficiently</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/testops-connecting-testing-and-operations-efficiently</guid><category><![CDATA[Testing]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[testops]]></category><category><![CDATA[Devops]]></category><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Tue, 30 Jul 2024 17:17:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1722359236225/dbdb6f0e-aa57-4fbf-9a76-22b408cc0127.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In software development, <a target="_blank" href="https://en.wikipedia.org/wiki/TestOps">TestOps</a> has become important for connecting testing and operations. As development speeds up and the need for continuous delivery grows, it's essential to include testing in the operational side of software delivery. This article looks at what TestOps is, why it matters, its main parts, and best practices for using it.</p>
<h2 id="heading-what-is-testops">What is TestOps?</h2>
<p>TestOps, short for Testing Operations, is a methodology that integrates testing processes with operational activities to ensure the quality and reliability of software in continuous delivery pipelines. It involves the collaboration of development, testing, and operations teams to streamline testing activities, automate processes, and enhance the overall efficiency of software delivery.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1722359525868/749d2ad2-ec2e-424c-8113-51ab7f2d3f23.png" alt class="image--center mx-auto" /></p>
<p>The concept of TestOps has evolved from the traditional roles of quality assurance (QA) and operations. In the past, QA was often seen as a separate phase that occurred after development, leading to delays and inefficiencies. With the advent of Agile and DevOps methodologies, the need for continuous testing throughout the development lifecycle became evident. TestOps emerged as a response to this need, emphasizing the integration of testing into the continuous integration and continuous delivery (CI/CD) pipeline.</p>
<h2 id="heading-why-is-testops-important">Why is TestOps Important?</h2>
<p>TestOps is crucial in modern software development for several reasons. It makes sure testing happens continuously during the development process, which helps find defects early and lowers the risk of releasing faulty software. By automating repetitive and time-consuming testing tasks, TestOps increases the speed and efficiency of the testing process, freeing up resources for more complex and creative testing activities. It encourages teamwork between development, testing, and operations teams, breaking down barriers and promoting a culture where everyone shares responsibility for software quality. As software projects grow in complexity and scale, TestOps provides the framework to manage and coordinate testing activities across multiple teams and environments. Finally, by integrating testing into the CI/CD pipeline, TestOps ensures that quality checks are an integral part of the development process, leading to higher-quality software releases.</p>
<h2 id="heading-key-components-of-testops">Key Components of TestOps</h2>
<p>Implementing TestOps involves several key components that work together to create a seamless testing and operational workflow.</p>
<h3 id="heading-test-automation">Test Automation</h3>
<p>Automation is at the core of TestOps. Automated tests are integrated into the CI/CD pipeline, allowing for continuous testing of code changes. This includes unit tests, integration tests, functional tests, and performance tests. Automation tools and frameworks, such as Selenium, JUnit, and Jenkins, play a critical role in enabling TestOps.</p>
<h3 id="heading-continuous-integration-and-continuous-delivery-cicd">Continuous Integration and Continuous Delivery (CI/CD)</h3>
<p>CI/CD pipelines are the backbone of TestOps. Continuous integration ensures that code changes are automatically tested and integrated into the main codebase, while continuous delivery automates the deployment of tested code to production environments. TestOps ensures that testing is seamlessly integrated into these pipelines, providing real-time feedback on code quality.</p>
<h3 id="heading-monitoring-and-observability">Monitoring and Observability</h3>
<p>TestOps involves monitoring and observing the behavior of software in production environments. This includes tracking performance metrics, error rates, and user feedback. By integrating monitoring tools like Prometheus, Grafana, and New Relic, TestOps provides insights into the health and performance of software, allowing teams to identify and address issues proactively.</p>
<h3 id="heading-infrastructure-as-code-iac">Infrastructure as Code (IaC)</h3>
<p>Infrastructure as Code (IaC) is a practice that involves managing and provisioning infrastructure through code. TestOps leverages IaC to create consistent and reproducible testing environments, ensuring that tests are executed in environments that closely resemble production. Tools like Terraform and Ansible are commonly used for IaC.</p>
<h3 id="heading-collaboration-and-communication">Collaboration and Communication</h3>
<p>Effective collaboration and communication are essential for TestOps. Development, testing, and operations teams need to work together seamlessly to ensure that testing activities are aligned with development goals and operational requirements. Collaboration tools like Slack, Jira, and Confluence facilitate communication and coordination among teams.</p>
<h2 id="heading-best-practices-for-implementing-testops">Best Practices for Implementing TestOps</h2>
<p>Implementing TestOps requires careful planning and execution. To begin with, it is crucial to establish a solid foundation of automated tests and CI/CD pipelines. Ensure that tests are reliable, maintainable, and provide comprehensive coverage of critical functionalities. Promoting a culture of collaboration and shared responsibility for quality is essential. Encouraging teams to work together, share knowledge, and continuously improve testing processes can significantly enhance the implementation of TestOps. Investing in the right tools and technologies to support TestOps is another key factor.</p>
<p>Choosing automation frameworks, CI/CD tools, monitoring solutions, and collaboration platforms that align with your organization's needs and goals can streamline the process. Continuous evaluation and improvement of testing and operational workflows are imperative. Gathering feedback, analyzing metrics, and identifying areas for optimization should be a constant effort. Lastly, integrating security testing into your TestOps practices is vital. Ensuring that security tests are part of your automated test suite and that vulnerabilities are identified and addressed early in the development process can safeguard your software.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>TestOps represents a significant shift in the way testing and operations are conducted in modern software development. By integrating testing into the operational workflow and automating key processes, TestOps ensures continuous quality and reliability throughout the software delivery lifecycle.</p>
<p>As organizations strive to meet the demands of rapid development and continuous delivery, adopting TestOps practices becomes essential for maintaining high standards of software quality and achieving operational excellence. By fostering collaboration, leveraging automation, and emphasizing continuous improvement, TestOps bridges the gap between testing and operations, paving the way for more efficient and effective software development.</p>
]]></content:encoded></item><item><title><![CDATA[Web 3.0 for Dummies: Understanding the Next Evolution of the Internet]]></title><description><![CDATA[Web 3.0, often heralded as the next evolution of the internet, is a term that’s gaining traction in tech circles, but what does it really mean? To understand Web 3.0, it's useful to first look back at the previous stages of the web: Web 1.0 and Web 2...]]></description><link>https://articles.eminmuhammadi.com/web-30-for-dummies-understanding-the-next-evolution-of-the-internet</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/web-30-for-dummies-understanding-the-next-evolution-of-the-internet</guid><category><![CDATA[Ubiquitous Connectivity]]></category><category><![CDATA[AI]]></category><category><![CDATA[crypto]]></category><category><![CDATA[Web3]]></category><category><![CDATA[internet]]></category><category><![CDATA[decentralization]]></category><category><![CDATA[Semantic Web]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Wed, 29 May 2024 19:47:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717011825972/88cb3b80-02d6-4293-befd-accd5463aa81.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Web 3.0, often heralded as the next evolution of the internet, is a term that’s gaining traction in tech circles, but what does it really mean? To understand Web 3.0, it's useful to first look back at the previous stages of the web: Web 1.0 and Web 2.0.</p>
<p>Web 1.0, which spanned from the early 1990s to the early 2000s, was the era of static websites. Think of it as a digital library where information was read-only. Websites were simple, mostly text-based, and there was little user interaction. It was like a massive, interconnected encyclopaedia that people could browse for information, but they couldn’t contribute to it or interact with other users.</p>
<p>Then came Web 2.0, starting in the mid-2000s. This era introduced dynamic and interactive content, which allowed users to not only consume but also create and share information. Social media platforms, blogs, wikis, and video-sharing sites emerged, enabling users to interact, collaborate, and share content. Web 2.0 transformed the internet into a participatory, social, and interactive space. Companies like Facebook, YouTube, and Twitter flourished, emphasizing user-generated content, social networking, and real-time communication.</p>
<p>Now, Web 3.0 aims to take this a step further by making the internet more intelligent, secure, and decentralized. It’s a vision of a new web that’s driven by several key principles:</p>
<ol>
<li><p><strong>Decentralization</strong>: Unlike Web 2.0, where data is often stored on centralized servers owned by big corporations, Web 3.0 promotes decentralization. This means data is distributed across multiple locations and controlled by individuals rather than a single entity. Blockchain technology plays a crucial role here, enabling secure and transparent transactions without the need for intermediaries. This decentralization can help reduce the control that large tech companies have over personal data and increase user privacy.</p>
</li>
<li><p><strong>Semantic Web</strong>: Web 3.0 is also called the Semantic Web. It aims to make data more understandable by machines. In Web 3.0, computers can interpret and process data in a way that’s closer to human reasoning, which allows for better search results, recommendations, and interactions. This is achieved through technologies like artificial intelligence (AI) and machine learning, which enable systems to understand the context and meaning of the data they process.</p>
</li>
<li><p><strong>Artificial Intelligence</strong>: AI is a cornerstone of Web 3.0, making the internet smarter and more responsive to user needs. AI can help filter and analyse vast amounts of data, providing personalized experiences and more relevant information. For example, AI can enhance search engines by understanding the intent behind a query rather than just matching keywords, leading to more accurate and useful results.</p>
</li>
<li><p><strong>Ubiquitous Connectivity</strong>: Web 3.0 envisions an internet where devices are seamlessly connected and can interact with each other. This includes not just computers and smartphones, but also the growing number of Internet of Things (IoT) devices, such as smart home appliances, wearable technology, and connected cars. This pervasive connectivity aims to create a more integrated and intuitive user experience, where devices and services work together seamlessly.</p>
</li>
</ol>
<p>One of the practical implications of Web 3.0 is the rise of decentralized applications, or dApps. These are applications that run on blockchain networks, offering the benefits of transparency, security, and user control. Unlike traditional apps that are controlled by a central authority, dApps operate on peer-to-peer networks, ensuring that no single entity has control over the entire system. This can lead to more democratic and fair digital ecosystems.</p>
<p>For instance, in the financial sector, decentralized finance (DeFi) platforms are emerging as alternatives to traditional banking and financial services. DeFi uses blockchain technology to offer services like lending, borrowing, and trading without the need for intermediaries like banks. This not only reduces costs but also increases access to financial services for people around the world.</p>
<p>Another significant development in Web 3.0 is the concept of digital identity. Currently, our online identities are often fragmented and controlled by different platforms. Web 3.0 aims to give individuals control over their digital identities through decentralized identity solutions. This means users can have a single, secure digital identity that they control and can use across various platforms, reducing the risk of identity theft and data breaches.</p>
<p>Moreover, Web 3.0 promises to improve content creation and ownership. Through technologies like non-fungible tokens (NFTs), creators can tokenize their work, proving ownership and authenticity. This can revolutionize industries like art, music, and gaming by providing creators with new ways to monetize their work and ensuring that they retain control over their creations.</p>
<p>In summary, Web 3.0 represents a paradigm shift in how we interact with the internet. By leveraging decentralization, the Semantic Web, artificial intelligence, and ubiquitous connectivity, it aims to create a more intelligent, secure, and user-centric online experience. While still in its early stages, the potential of Web 3.0 to transform various aspects of our digital lives is immense, promising a future where the internet is more open, fair, and aligned with the interests of its users. As these technologies continue to evolve, we can expect to see more innovative applications and services that redefine our online interactions and digital economy.</p>
]]></content:encoded></item><item><title><![CDATA[The Future of Software Testing: How GPT-4o is Transforming the Testing Process]]></title><description><![CDATA[The introduction of AI, particularly models like ChatGPT-4o, has revolutionized the landscape of software quality assurance. As technology continues to evolve rapidly, the importance of software testing has never been greater. This blog post explores...]]></description><link>https://articles.eminmuhammadi.com/the-future-of-software-testing-how-gpt-4o-is-transforming-the-testing-process</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/the-future-of-software-testing-how-gpt-4o-is-transforming-the-testing-process</guid><category><![CDATA[GPT-4o]]></category><category><![CDATA[chatgpt]]></category><category><![CDATA[AI]]></category><category><![CDATA[software development]]></category><category><![CDATA[Testing]]></category><category><![CDATA[Software Testing]]></category><category><![CDATA[test-automation]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Sat, 18 May 2024 20:28:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1716063158905/503845c4-1856-43cf-9195-3713216facf8.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The introduction of AI, particularly models like <a target="_blank" href="https://openai.com/index/hello-gpt-4o/">ChatGPT-4o</a>, has revolutionized the landscape of software quality assurance. As technology continues to evolve rapidly, the importance of software testing has never been greater. This blog post explores how AI-driven solutions, such as ChatGPT-4o, are transforming software testing, ensuring robust, reliable, and efficient systems.</p>
<h2 id="heading-the-evolution-of-software-testing"><strong>The Evolution of Software Testing</strong></h2>
<p>Traditionally, software testing involved manual processes that were not only time-consuming but also prone to human error. The rise of automated testing tools addressed some of these challenges, offering faster and more consistent results. However, the complexity of modern software systems demands more sophisticated solutions, paving the way for AI-driven testing.</p>
<h2 id="heading-enter-chatgpt-4o"><strong>Enter ChatGPT-4o</strong></h2>
<p>ChatGPT-4o, developed by <a target="_blank" href="https://openai.com/about/">OpenAI</a>, represents a significant leap in AI capabilities. With its advanced natural language processing (NLP) and machine learning (ML) algorithms, ChatGPT-4o can understand and generate human-like text, making it an invaluable tool for software testing.</p>
<h4 id="heading-how-chatgpt-40-enhances-software-testing"><strong>How ChatGPT-4.0 Enhances Software Testing</strong></h4>
<ul>
<li><p><strong>Automated Test Case Generation:</strong> ChatGPT-4o can analyse software requirements and automatically generate comprehensive test cases. This not only saves time but also ensures that all possible scenarios are covered, reducing the risk of undetected bugs.</p>
</li>
<li><p><strong>Natural Language Processing for Test Scripts:</strong> By leveraging its NLP capabilities, ChatGPT-4o can translate natural language requirements into executable test scripts. This bridges the gap between non-technical stakeholders and the testing team, ensuring clear and precise test coverage.</p>
</li>
<li><p><strong>Enhanced Bug Detection:</strong> With its advanced pattern recognition, ChatGPT-4o can detect subtle bugs that might elude human testers. It can also predict potential problem areas based on historical data, allowing teams to proactively address issues before they escalate.</p>
</li>
<li><p><strong>Continuous Integration and Deployment (CI/CD):</strong> ChatGPT-4o seamlessly integrates with CI/CD pipelines, automating testing processes and ensuring that new code changes do not introduce regressions. This leads to more stable releases and a faster time-to-market.</p>
</li>
</ul>
<h2 id="heading-the-human-ai-collaboration"><strong>The Human-AI Collaboration</strong></h2>
<p>While AI-driven tools like ChatGPT-4o offer significant advantages, human expertise remains indispensable. AI can handle repetitive and data-intensive tasks, allowing testers to focus on more strategic and creative aspects of quality assurance. This synergy between human and AI capabilities leads to more thorough and efficient testing processes.</p>
<p>Let's start chatting with GPT. I have sent a screenshot of the LinkedIn sign-in page and asked AI to create test cases for the sign-in functionality in Gherkin style.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716063481632/c7ae0c9a-3bae-442c-affb-87d38fa40acc.png" alt="A screenshot of the LinkedIn sign-in page. The page includes fields for email or phone and password, a &quot;Forgot password?&quot; link, a &quot;Sign in&quot; button, and an option to sign in with Apple. Below the image is a prompt asking software testers to create test cases for the sign-in functionality in Gherkin style." class="image--center mx-auto" /></p>
<p>In a result, I got test suite covers the primary actions a user might take on the LinkedIn login page, ensuring that each functionality works as expected and provides appropriate feedback to the user.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1716063722029/d3b339d6-215b-45d5-8884-33eaa78e23bc.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-the-road-ahead"><strong>The Road Ahead</strong></h2>
<p>As AI continues to evolve, the future of software testing looks promising. ChatGPT-4o and similar AI models will play a pivotal role in creating smarter, more resilient software systems. Organizations that embrace these advancements will be better equipped to navigate the complexities of modern software development, ensuring high-quality products that meet the ever-growing demands of users.</p>
<p>In conclusion, the integration of AI in software testing, exemplified by ChatGPT-4o, marks a new era of innovation and efficiency. By automating routine tasks and enhancing human capabilities, AI is set to redefine the standards of software quality assurance, making the impossible possible in the world of technology.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering the Art of Agile Testing: A Comprehensive Guide]]></title><description><![CDATA[Agile methodology has become synonymous with adaptability and efficiency. At the heart of Agile development lies Agile testing, a dynamic approach that ensures the quality and functionality of software products through iterative testing and collabora...]]></description><link>https://articles.eminmuhammadi.com/mastering-the-art-of-agile-testing-a-comprehensive-guide</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/mastering-the-art-of-agile-testing-a-comprehensive-guide</guid><category><![CDATA[Testing]]></category><category><![CDATA[QA]]></category><category><![CDATA[software development]]></category><category><![CDATA[agile]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Developer]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Sun, 12 May 2024 19:49:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715543005920/988ef83e-d999-421c-a96a-efe2e718df8c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Agile methodology has become synonymous with adaptability and efficiency. At the heart of Agile development lies Agile testing, a dynamic approach that ensures the quality and functionality of software products through iterative testing and collaboration. In this guide, we'll delve deep into the nuances of Agile testing, exploring its pros and cons with real-world examples to help you navigate this essential aspect of modern software development.</p>
<h2 id="heading-understanding-agile-testing"><strong>Understanding Agile Testing</strong></h2>
<p>Agile testing is more than just a phase in the software development lifecycle; it's a mindset that permeates every aspect of the process. Unlike traditional waterfall methodologies, where testing occurs at the end of development, Agile testing is integrated throughout, allowing for continuous evaluation and refinement. By embracing change, collaboration, and feedback, Agile testing enables teams to deliver high-quality software that meets the evolving needs of users and stakeholders.</p>
<h2 id="heading-pros-of-agile-testing"><strong>Pros of Agile Testing</strong></h2>
<p>Agile methodology has revolutionized the way teams approach project management and product delivery. At the heart of Agile development lies Agile testing, a dynamic process that emphasizes collaboration, adaptability, and continuous improvement. In this section, we'll explore the numerous benefits of Agile testing and how it contributes to the success of software development projects.</p>
<p>Agile testing is an iterative approach to software testing that aligns with the principles of Agile development. Unlike traditional testing methodologies, which occur in a separate phase at the end of the development cycle, Agile testing is integrated throughout the process, ensuring early and continuous evaluation of the software's functionality and quality.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715543287651/cf7c1e34-82ea-472c-b805-972dcab1afb2.png" alt="Pros of Agile Testing" class="image--center mx-auto" /></p>
<h3 id="heading-increased-flexibility"><strong>Increased Flexibility</strong></h3>
<p>One of the primary advantages of Agile testing is its inherent flexibility. Agile methodologies allow teams to adapt to changing requirements and market conditions quickly. By embracing change and iteration, Agile testing ensures that testing efforts remain aligned with evolving project goals and stakeholder expectations.</p>
<h3 id="heading-quick-response-to-feedback"><strong>Quick Response to Feedback</strong></h3>
<p>Agile testing facilitates rapid feedback loops, enabling teams to gather feedback early and often from stakeholders, users, and team members. This feedback is incorporated into subsequent iterations, allowing for continuous improvement and refinement of the product.</p>
<h3 id="heading-faster-time-to-market"><strong>Faster Time-to-Market</strong></h3>
<p>Agile testing contributes to faster time-to-market by shortening development cycles and enabling continuous delivery of working software. By breaking down complex projects into smaller, manageable chunks, Agile teams can release incremental updates and features more frequently, providing value to customers sooner.</p>
<h3 id="heading-improved-collaboration">Improved Collaboration</h3>
<p>Agile testing fosters collaboration and communication among team members, leading to enhanced synergy and alignment. By involving testers, developers, product owners, and other stakeholders throughout the development process, Agile teams can leverage diverse perspectives and expertise to deliver better results.</p>
<h3 id="heading-higher-quality"><strong>Higher Quality</strong></h3>
<p>Agile testing promotes a culture of quality by emphasizing early and frequent testing, continuous integration, and automated testing. By identifying and addressing issues early in the development process, Agile teams can mitigate risks, improve code quality, and deliver a more reliable product to customers.</p>
<h3 id="heading-cost-effectiveness"><strong>Cost-Effectiveness</strong></h3>
<p>Agile testing contributes to cost-effectiveness by enabling early bug detection and prevention. By identifying and resolving issues early in the development process, Agile teams can avoid costly rework and maintenance expenses associated with fixing defects in later stages.</p>
<h3 id="heading-enhanced-customer-satisfaction"><strong>Enhanced Customer Satisfaction</strong></h3>
<p>Ultimately, Agile testing leads to enhanced customer satisfaction by involving stakeholders in the development process, delivering features that meet their needs, and providing regular updates and opportunities for feedback. By prioritizing customer value and responsiveness, Agile teams can build products that delight and retain users.</p>
<hr />
<p>Agile testing offers numerous benefits for software development projects, including increased flexibility, faster time-to-market, improved collaboration, higher quality, cost-effectiveness, and enhanced customer satisfaction. By embracing Agile principles and practices, teams can streamline their testing efforts, accelerate product delivery, and achieve greater success in today's competitive marketplace.</p>
<h2 id="heading-cons-of-agile-testing"><strong>Cons of Agile Testing</strong></h2>
<p>However, despite its numerous benefits, Agile testing also comes with its own set of challenges and limitations. In this section, we'll explore some of the cons associated with Agile testing and how they can impact the testing process and project outcomes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715543317438/a2f702b0-302a-48cf-8cae-1adcdd5869a5.png" alt="Cons of Agile Testing" class="image--center mx-auto" /></p>
<h3 id="heading-lack-of-documentation"><strong>Lack of Documentation</strong></h3>
<p>One of the challenges of Agile testing is the lack of comprehensive documentation. While Agile methodology emphasizes working software over exhaustive documentation, this can pose problems for teams accustomed to detailed test plans and reports. Without adequate documentation, it can be challenging to track testing progress, identify potential issues, and ensure regulatory compliance.</p>
<h3 id="heading-difficulty-in-maintaining-documentation"><strong>Difficulty in Maintaining Documentation</strong></h3>
<p>Even when documentation is created in Agile projects, maintaining it can be a struggle. With frequent changes and iterations, keeping documentation up-to-date becomes a daunting task. This can lead to discrepancies between actual testing activities and documented processes, resulting in confusion and inefficiencies within the team.</p>
<h3 id="heading-challenges-in-knowledge-transfer"><strong>Challenges in Knowledge Transfer</strong></h3>
<p>Agile testing relies heavily on collaboration and knowledge sharing among team members. However, in fast-paced Agile environments, knowledge transfer can be hindered by time constraints and shifting priorities. This can result in gaps in understanding and inconsistency in testing approaches, impacting the overall quality of the product.</p>
<h3 id="heading-time-constraints"><strong>Time Constraints</strong></h3>
<p>Agile projects are characterized by short development cycles, known as iterations or sprints. While this rapid pace allows for quick feedback and adaptation, it also puts pressure on testing teams to deliver within tight timeframes. Limited time for testing can lead to rushed testing efforts, overlooked issues, and compromised quality.</p>
<h3 id="heading-scope-creep"><strong>Scope Creep</strong></h3>
<p>Scope creep refers to the tendency for project requirements to expand beyond initial expectations, often without proper evaluation of their impact on testing efforts. In Agile projects, where requirements are subject to change, scope creep can derail testing priorities and strain resources, leading to delays and quality issues.</p>
<h3 id="heading-communication-challenges"><strong>Communication Challenges</strong></h3>
<p>Effective communication is essential for successful Agile testing, but it can be challenging to maintain clear and open communication channels in distributed or fast-paced environments. Communication breakdowns between team members, stakeholders, and external partners can lead to misunderstandings, delays, and misaligned expectations.</p>
<h3 id="heading-limited-predictability"><strong>Limited Predictability</strong></h3>
<p>Agile projects prioritize adaptability and responsiveness, which can sometimes come at the cost of predictability. Difficulty in estimating project timelines and testing outcomes can lead to uncertainty and ambiguity, making it challenging to plan and prioritize testing activities effectively.</p>
<hr />
<p>While Agile testing offers numerous benefits in terms of flexibility, collaboration, and responsiveness, it also presents challenges such as lack of documentation, time constraints, scope creep, communication challenges, and limited predictability. By acknowledging these cons and implementing strategies to mitigate their impact, teams can optimize their Agile testing processes and achieve success in their software development endeavours.</p>
<h2 id="heading-conclusion"><strong>Conclusion</strong></h2>
<p>Agile testing offers numerous benefits for software engineering teams, including early defect detection, enhanced collaboration, flexibility, and continuous improvement. However, it also presents challenges such as resource intensiveness, scope creep, dependency on collaboration, and documentation concerns. By understanding these pros and cons and leveraging best practices, software engineering professionals can harness the power of Agile testing to deliver high-quality software that meets the needs of users and stakeholders in today's fast-paced digital landscape.</p>
]]></content:encoded></item><item><title><![CDATA[Diffie–Hellman key exchange (Example in Golang)]]></title><description><![CDATA[The Diffie-Hellman key exchange protocol, named after its inventors Whitfield Diffie and Martin Hellman, is a fundamental method in cryptography for securely exchanging cryptographic keys over a public channel. Published in 1976, it was one of the ea...]]></description><link>https://articles.eminmuhammadi.com/diffiehellman-key-exchange-example-in-golang</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/diffiehellman-key-exchange-example-in-golang</guid><category><![CDATA[Key exchange]]></category><category><![CDATA[shared secret]]></category><category><![CDATA[Cryptography]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[TLS]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Mon, 18 Mar 2024 19:36:06 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717354704357/3820005d-a560-4660-95f7-beab94bd4fb8.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The <a target="_blank" href="https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange">Diffie-Hellman key exchange protocol</a>, named after its inventors Whitfield Diffie and Martin Hellman, is a fundamental method in cryptography for securely exchanging cryptographic keys over a public channel. Published in 1976, it was one of the earliest practical implementations of public-key cryptography, introducing the concept of a private key and corresponding public key.</p>
<p>At its core, Diffie–Hellman enables two parties to generate a shared secret key over a public channel, which can then be used for secure communication. The algorithm relies on the computational complexity of the discrete logarithm problem for its security.</p>
<p>The Diffie-Hellman key exchange (DHKE) is a fundamental cryptographic protocol that enables two parties to establish a shared secret key over an insecure channel. This shared secret can then be used for encrypted communication. Here's a detailed cryptographic explanation of how the Diffie-Hellman key exchange works:</p>
<ul>
<li><p><strong>Parameter Selection</strong>: Before starting the key exchange, both parties agree on certain parameters:<br />  <strong>Modulus</strong><code>p</code>: A prime number used for modular arithmetic.<br />  <strong>Base</strong><code>g</code>: A primitive root modulo <code>p</code>, meaning <code>g</code> generates all possible residues when raised to different powers modulo <code>p</code>.</p>
</li>
<li><p><strong>Private Key Generation</strong>: Each party generates its own private key:<br />  Alice chooses a random secret integer <code>PrivK_A</code><br />  Bob chooses a random secret integer <code>PrivK_B</code></p>
</li>
<li><p><strong>Public Key Calculation:</strong><br />  Alice calculates her public key <code>PubK_A</code> by computing <code>PubK_A = (g ^ PrivK_A) mod  p</code><br />  Bob calculates her public key <code>PubK_B</code> by computing <code>PubK_B = (g ^ PrivK_B) mod  p</code></p>
</li>
<li><p><strong>Exchange of Public Keys</strong>: Alice and Bob exchange their public keys over the insecure channel.</p>
</li>
<li><p><strong>Shared Secret Calculation:</strong></p>
<p>  Alice computes the shared secret <code>secret_A</code> using Bob's public key <code>secret_A = (PubK_B ^ PrivK_A) mod p</code><br />  Bob computes the shared secret <code>secret_B</code> using Alice's public key <code>secret_B = (PubK_A ^ PrivK_B) mod p</code></p>
</li>
<li><p><strong>Key Derivation</strong>: Both Alice and Bob arrive at the same shared secretw, which can now be used as a symmetric encryption key.</p>
</li>
<li><p><strong>Encryption and Decryption</strong>: Now that Alice and Bob share a secret key, they can use symmetric encryption algorithms (like AES) to encrypt and decrypt their messages securely.</p>
</li>
</ul>
<p>While Diffie–Hellman provides a robust framework for key exchange, its security relies on the difficulty of solving the discrete logarithm problem. Factors such as the size of the prime modulus and the choice of base influence the algorithm's resistance to attacks.</p>
<p>Over the years, researchers have identified potential vulnerabilities in Diffie–Hellman, particularly concerning the size of the prime modulus and the risk of precomputation attacks. Mitigating these challenges often involves using larger prime moduli or transitioning to elliptic curve cryptography.</p>
<p>Diffie–Hellman finds extensive use in various cryptographic protocols and systems. It forms the basis for establishing forward secrecy in protocols like Transport Layer Security (TLS) and is employed in public key encryption schemes, password-authenticated key agreement, and more.</p>
<p>Overall, the Diffie-Hellman key exchange provides a secure method for establishing a shared secret key between two parties without prior communication, enabling them to securely communicate over an insecure channel.</p>
<h2 id="heading-implementation">Implementation</h2>
<p>In this example, we'll implement the Diffie–Hellman key exchange algorithm in Go (Golang). This algorithm allows two parties (Alice and Bob) to establish a shared secret over an insecure communication channel. We'll walk through the implementation step by step, explaining each part along the way. Let's get started!</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"crypto/rand"</span>
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"math/big"</span>
)

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">generatePrime</span><span class="hljs-params">()</span> *<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    prime, _ := rand.Prime(rand.Reader, <span class="hljs-number">256</span>) <span class="hljs-comment">// Generating a 256-bit prime number</span>
    <span class="hljs-keyword">return</span> prime
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">generatePrivateKey</span><span class="hljs-params">(prime *big.Int)</span> *<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    privateKey, _ := rand.Int(rand.Reader, prime)
    <span class="hljs-keyword">return</span> privateKey
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">calculatePublicKey</span><span class="hljs-params">(prime, privateKey, base *big.Int)</span> *<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    publicKey := <span class="hljs-built_in">new</span>(big.Int).Exp(base, privateKey, prime)
    <span class="hljs-keyword">return</span> publicKey
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">calculateSharedSecret</span><span class="hljs-params">(prime, privateKey, publicKey *big.Int)</span> *<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    sharedSecret := <span class="hljs-built_in">new</span>(big.Int).Exp(publicKey, privateKey, prime)
    <span class="hljs-keyword">return</span> sharedSecret
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Alice and Bob agree on a natural number P a generating element G</span>
    <span class="hljs-comment">// in the finite cyclic group G of order n</span>
    <span class="hljs-comment">// Which is known to everyone</span>
    P := generatePrime()
    G := big.NewInt(<span class="hljs-number">2</span>) <span class="hljs-comment">// G is a primitive root modulo P (generator)</span>

    <span class="hljs-comment">// Alice and Bob generate their private keys</span>
    <span class="hljs-comment">// PrivK = random number between 1 and P-1</span>
    PrivK_A := generatePrivateKey(P)
    PrivK_B := generatePrivateKey(P)

    <span class="hljs-comment">// Alice and Bob calculate their public keys</span>
    <span class="hljs-comment">// PubK = G^PrivK mod P</span>
    PubK_A := calculatePublicKey(P, PrivK_A, G)
    PubK_B := calculatePublicKey(P, PrivK_B, G)

    <span class="hljs-comment">// Now Alice and Bob exchange their public keys</span>
    <span class="hljs-comment">// and calculate the shared secret</span>
    <span class="hljs-comment">// shared_secret = PrivK^PubK mod P</span>
    secret_A := calculateSharedSecret(P, PrivK_A, PubK_B)
    secret_B := calculateSharedSecret(P, PrivK_B, PubK_A)

    <span class="hljs-comment">// If the shared secrets match, then Alice and Bob have</span>
    <span class="hljs-comment">// successfully established a shared secret</span>
    <span class="hljs-keyword">if</span> secret_A.Cmp(secret_B) == <span class="hljs-number">0</span> {
        fmt.Printf(<span class="hljs-string">"Shared secret: %s\n"</span>, secret_A.Text(<span class="hljs-number">16</span>))
    } <span class="hljs-keyword">else</span> {
        fmt.Println(<span class="hljs-string">"Shared secret mismatch"</span>)
    }
}
</code></pre>
<p>In the main part of the code:</p>
<ul>
<li><p>We generate a prime number <code>P</code> and a base number <code>G</code>. These are agreed upon by both parties.</p>
</li>
<li><p>Then, both Alice and Bob generate their own private keys (<code>PrivK_A</code> and <code>PrivK_B</code>) using the <code>generatePrivateKey()</code> function.</p>
</li>
<li><p>They calculate their respective public keys (<code>PubK_A</code> and <code>PubK_B</code>) using the <code>calculatePublicKey()</code> function.</p>
</li>
<li><p>Next, they exchange their public keys.</p>
</li>
<li><p>Finally, they calculate the shared secret using each other's public keys and their own private keys. If the shared secrets match, it means they have successfully established a shared secret.</p>
</li>
</ul>
<p>The purpose of this code is to demonstrate a simplified version of the Diffie-Hellman key exchange algorithm, which allows two parties to securely establish a shared secret over an insecure channel.</p>
]]></content:encoded></item><item><title><![CDATA[API Testing with Playwright: Handbook for beginners]]></title><description><![CDATA[When it comes to testing APIs, having efficient tools and frameworks is crucial. Playwright, a powerful automation library, not only excels in browser automation but also provides seamless API testing capabilities. In this article, we'll delve into h...]]></description><link>https://articles.eminmuhammadi.com/api-testing-with-playwright-handbook-for-beginners</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/api-testing-with-playwright-handbook-for-beginners</guid><category><![CDATA[Python]]></category><category><![CDATA[Testing]]></category><category><![CDATA[playwright]]></category><category><![CDATA[REST API]]></category><category><![CDATA[API TESTING]]></category><category><![CDATA[test-automation]]></category><category><![CDATA[pytest]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Sun, 18 Feb 2024 10:19:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717355144759/64f3b9d8-15f9-48b2-8bdb-d8bfff145c14.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When it comes to testing APIs, having efficient tools and frameworks is crucial. Playwright, a powerful automation library, not only excels in browser automation but also provides seamless <a target="_blank" href="https://playwright.dev/python/docs/api-testing">API testing capabilities</a>. In this article, we'll delve into how Playwright can be leveraged for API testing in Python.</p>
<h2 id="heading-installation">Installation</h2>
<p>To begin with, let's set up our testing environment. We'll need Python installed on our system along with the Playwright library. Playwright can be installed via pip, ensuring compatibility with various operating systems.</p>
<pre><code class="lang-bash">pip install pytest-playwright
</code></pre>
<p>Once Playwright is installed, we'll also need Pytest, a popular testing framework for Python.</p>
<pre><code class="lang-bash">pip install pytest
</code></pre>
<h2 id="heading-understanding-the-test-suite"><strong>Understanding the Test Suite</strong></h2>
<p>Now that our environment is set up, let's take a closer look at the test suite we'll be writing. Our suite consists of test functions targeting different HTTP methods: <code>GET</code>, <code>POST</code>, <code>PUT</code>, and <code>DELETE</code>. These functions utilize the capabilities of the Playwright library to send requests and validate responses.</p>
<pre><code class="lang-python"><span class="hljs-keyword">import</span> os
<span class="hljs-keyword">from</span> typing <span class="hljs-keyword">import</span> Generator

<span class="hljs-keyword">import</span> pytest
<span class="hljs-keyword">from</span> playwright.sync_api <span class="hljs-keyword">import</span> Playwright, APIRequestContext

__BASE_URL__ = <span class="hljs-string">"https://httpbin.dmuth.org/"</span>

<span class="hljs-meta">@pytest.fixture(scope="session")</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">api_request</span>(<span class="hljs-params">
    playwright: Playwright,
</span>) -&gt; Generator[APIRequestContext, <span class="hljs-keyword">None</span>, <span class="hljs-keyword">None</span>]:</span>
    headers = {
        <span class="hljs-string">"Accept"</span>: <span class="hljs-string">"application/json"</span>,
        <span class="hljs-string">"User-Agent"</span>: <span class="hljs-string">"articles.eminmuhammadi.com"</span>,
    }
    request_context = playwright.request.new_context(
        base_url=__BASE_URL__, extra_http_headers=headers
    )
    <span class="hljs-keyword">yield</span> request_context
    request_context.dispose()
</code></pre>
<p>The <code>api_request</code> fixture is responsible for creating a new API request context using Playwright. It sets the base URL for all requests and can include additional headers if needed. After yielding the request context to tests, it ensures proper disposal afterward.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_get</span>(<span class="hljs-params">api_request: APIRequestContext</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    response = api_request.get(<span class="hljs-string">"/get/args/?foo=bar"</span>)

    <span class="hljs-keyword">assert</span> response.status == <span class="hljs-number">200</span>
    <span class="hljs-keyword">assert</span> response.ok
    <span class="hljs-keyword">assert</span> response.json()[<span class="hljs-string">"foo"</span>] == <span class="hljs-string">"bar"</span>
</code></pre>
<p>This code block defines a test function named <code>test_get</code> that verifies the functionality of a <strong>GET</strong> request to a specific endpoint. The function takes an <code>APIRequestContext</code> object named <code>api_request</code> as a parameter, indicating that it's part of a test suite using pytest. Inside the function, a GET request is sent to the "/get" endpoint using the <code>api_request</code> object, which encapsulates the context for making HTTP requests.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_post</span>(<span class="hljs-params">api_request: APIRequestContext</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    data = {
        <span class="hljs-string">"foo"</span>: <span class="hljs-string">"bar"</span>,
    }

    response = api_request.post(<span class="hljs-string">"/post"</span>, data=data)

    <span class="hljs-keyword">assert</span> response.status == <span class="hljs-number">200</span>
    <span class="hljs-keyword">assert</span> response.ok
    <span class="hljs-keyword">assert</span> response.json()[<span class="hljs-string">"data"</span>][<span class="hljs-string">"foo"</span>] == <span class="hljs-string">"bar"</span>
</code></pre>
<p>This code block defines a test function named <code>test_post</code> responsible for testing the functionality of a POST request to a specific endpoint. Within the function, a Python dictionary named <code>data</code> is defined, representing the payload to be sent along with the POST request. This payload typically contains key-value pairs of data that the API endpoint expects to receive. The <code>api_request</code> object, which encapsulates the context for making HTTP requests, is then used to send a POST request to the "/post" endpoint, including the defined <code>data</code>.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_put</span>(<span class="hljs-params">api_request: APIRequestContext</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    data = {
        <span class="hljs-string">"foo"</span>: <span class="hljs-string">"bar"</span>,
    }

    response = api_request.put(<span class="hljs-string">"/put"</span>, data=data)

    <span class="hljs-keyword">assert</span> response.status == <span class="hljs-number">200</span>
    <span class="hljs-keyword">assert</span> response.ok
    <span class="hljs-keyword">assert</span> response.json()[<span class="hljs-string">"data"</span>][<span class="hljs-string">"foo"</span>] == <span class="hljs-string">"bar"</span>
</code></pre>
<p>This code block defines a test function named <code>test_put</code>, specifically designed to assess the functionality of a PUT request to a designated endpoint. Within this function, a Python dictionary named <code>data</code> is created, serving as the payload to be included in the PUT request. This payload typically contains the data to be updated or modified on the server side. The <code>api_request</code> object is then employed to dispatch a PUT request to the "/put" endpoint, incorporating the specified <code>data</code>.</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">test_delete</span>(<span class="hljs-params">api_request: APIRequestContext</span>) -&gt; <span class="hljs-keyword">None</span>:</span>
    response = api_request.delete(<span class="hljs-string">"/delete"</span>)

    <span class="hljs-keyword">assert</span> response.status == <span class="hljs-number">200</span>
    <span class="hljs-keyword">assert</span> response.ok
</code></pre>
<p>This code block defines a test function named <code>test_delete</code> responsible for evaluating the behavior of a DELETE request to a specific endpoint. Within this function, the <code>api_request</code> object, which encapsulates the context for making HTTP requests, is utilized to dispatch a DELETE request to the "/delete" endpoint. This action is intended to remove or delete a resource from the server.</p>
<p>After the request is sent, two assertions are utilized to check the response: initially, it validates that the response's HTTP status code is 200, signifying a successful request, while subsequently ensuring that the response object meets the criteria of being "OK," typically denoted by a status code within the 200-299 range. These assertion procedures are pivotal in verifying the proper handling of GET, POST, PUT, DELETE requests by the API endpoint, thereby safeguarding the integrity and effectiveness of the API's deletion functionalities.</p>
<h2 id="heading-results">Results</h2>
<p>To view test results using <code>pytest-html</code>, you can follow these steps</p>
<pre><code class="lang-bash">pip install pytest-html
</code></pre>
<p>Once installed, you can run pytest with the <code>--html</code> option followed by the desired filename for the HTML report</p>
<pre><code class="lang-bash">pytest --html=report.html
</code></pre>
<p>The HTML report provides a comprehensive overview of the test results, including the number of tests passed, failed, and skipped, along with any errors or failures encountered during testing. You can navigate through the report to view details of each test case, including the test name, status, duration, and any captured output or exceptions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1708251150384/dd971395-6c49-4dd9-867d-d18736951098.png" alt class="image--center mx-auto" /></p>
<p>By following these steps, you can generate and view an HTML report of your test results using <code>pytest-html</code>, providing a more structured and visually appealing way to analyze the outcomes of your test suite.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>In conclusion, testing APIs is essential in ensuring the reliability and functionality of web applications. With Playwright, developers have access to a versatile automation library that not only excels in browser automation but also provides seamless capabilities for API testing in Python.</p>
<p>By utilizing pytest-html, developers can generate comprehensive HTML reports of their test results. These reports offer detailed insights into the test outcomes, including the number of tests passed, failed, or skipped, along with any encountered errors or failures. This structured presentation facilitates easy analysis and debugging, ultimately contributing to the overall quality and reliability of the tested APIs.</p>
<p>In conclusion, leveraging Playwright for API testing in Python, coupled with pytest and pytest-html, provides developers with robust tools and frameworks to streamline the testing process, leading to more resilient and dependable web applications.</p>
]]></content:encoded></item><item><title><![CDATA[Shamir's secret sharing algorithm (Example in Golang)]]></title><description><![CDATA[Shamir's Secret Sharing Algorithm stands out as a powerful tool for distributing and safeguarding secrets. In this article, I explore the complexities of Shamir's Secret Sharing Algorithm, delving into its fundamental principles, practical applicatio...]]></description><link>https://articles.eminmuhammadi.com/shamirs-secret-sharing-algorithm-example-in-golang</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/shamirs-secret-sharing-algorithm-example-in-golang</guid><category><![CDATA[Cryptography]]></category><category><![CDATA[crypto]]></category><category><![CDATA[golang]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[cybersecurity]]></category><category><![CDATA[Computer Science]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Wed, 07 Feb 2024 19:50:29 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717354908571/f5f59383-a8cd-4f41-8e15-e522618e2bb6.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Shamir's Secret Sharing Algorithm stands out as a powerful tool for distributing and safeguarding secrets. In this article, I explore the complexities of Shamir's Secret Sharing Algorithm, delving into its fundamental principles, practical applications, and offering illustrative examples to demonstrate its efficacy.</p>
<h3 id="heading-understanding-shamirs-secret-sharing-algorithm"><strong>Understanding Shamir's Secret Sharing Algorithm</strong></h3>
<p>Shamir's Secret Sharing Algorithm, named after its creator <a target="_blank" href="https://web.mit.edu/6.857/OldStuff/Fall03/ref/Shamir-HowToShareASecret.pdf">Adi Shamir</a>, is a cryptographic algorithm used to distribute a secret among a group of participants, ensuring that the secret can only be reconstructed when a minimum threshold of participants combines their shares. The algorithm relies on polynomial interpolation over finite fields to achieve this.</p>
<p>At the core of Shamir's algorithm lies the concept of polynomial interpolation. In essence, a polynomial of degree <code>t - 1</code> is constructed, where <code>t</code> represents the minimum number of shares required to reconstruct the secret. Each participant is then assigned a share, corresponding to a point on the polynomial curve. With <code>t</code> or more shares, the polynomial can be reconstructed, revealing the original secret.</p>
<h3 id="heading-how-shamirs-secret-sharing-works-an-example"><strong>How Shamir's Secret Sharing Works: An Example</strong></h3>
<p>Consider a scenario where a company wishes to safeguard access to critical data by distributing a secret among three executives. Using Shamir's Secret Sharing Algorithm, the company can ensure that at least two executives must collaborate to access the sensitive information.</p>
<ol>
<li><p><strong>Generating Shares</strong>: The company generates a random polynomial of degree 2 (since they require 3 shares for reconstruction). Let's assume the polynomial is <code>f(x) = a0 + a1x + a2x^2</code>, where <code>a0</code>​ is the secret they want to protect, and <code>a1</code> and <code>a2</code> are randomly chosen coefficients.</p>
</li>
<li><p><strong>Assigning Shares</strong>: The company computes three shares by evaluating the polynomial at three distinct points. Each executive receives one share:</p>
<ul>
<li><p>Executive 1: Receives the share <code>(x1, f(x1))</code></p>
</li>
<li><p>Executive 2: Receives the share <code>(x2, f(x2))</code></p>
</li>
<li><p>Executive 3: Receives the share <code>(x3, f(x3))</code></p>
</li>
</ul>
</li>
<li><p><strong>Reconstruction</strong>: To reconstruct the secret, any two executives combine their shares and interpolate the polynomial. For instance, Executives 1 and 2 collaborate, revealing the secret.</p>
</li>
</ol>
<h3 id="heading-applications-of-shamirs-secret-sharing-algorithm"><strong>Applications of Shamir's Secret Sharing Algorithm</strong></h3>
<p>Shamir's Secret Sharing Algorithm finds applications in various domains, including:</p>
<ul>
<li><p><strong>Cryptography</strong>: Securely distributing encryption keys among multiple parties.</p>
</li>
<li><p><strong>Data Recovery</strong>: Splitting sensitive data into shares to prevent loss and enable recovery.</p>
</li>
<li><p><strong>Access Control</strong>: Requiring multiple parties to authenticate for access to highly sensitive information.</p>
</li>
</ul>
<h2 id="heading-implementation">Implementation</h2>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"fmt"</span>
    <span class="hljs-string">"math/big"</span>
    <span class="hljs-string">"math/rand"</span>
)

<span class="hljs-keyword">var</span> prime *big.Int

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">init</span><span class="hljs-params">()</span></span> {
    prime, _ = big.NewInt(<span class="hljs-number">0</span>).SetString(<span class="hljs-string">"115792089237316195423570985008687907853269984665640564039457584007913129639747"</span>, <span class="hljs-number">10</span>)
}

<span class="hljs-comment">// GenerateCoefficients generates random coefficients for the polynomial</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">GenerateCoefficients</span><span class="hljs-params">(secret *big.Int, threshold, numShares <span class="hljs-keyword">int</span>)</span> []*<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    coefficients := <span class="hljs-built_in">make</span>([]*big.Int, threshold)
    coefficients[<span class="hljs-number">0</span>] = secret

    <span class="hljs-keyword">for</span> i := <span class="hljs-number">1</span>; i &lt; threshold; i++ {
        coefficients[i] = big.NewInt(<span class="hljs-keyword">int64</span>(rand.Intn(<span class="hljs-number">256</span>)))
    }

    <span class="hljs-keyword">return</span> coefficients
}

<span class="hljs-comment">// EvaluatePolynomial evaluates the polynomial at the given x value</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">EvaluatePolynomial</span><span class="hljs-params">(coefficients []*big.Int, x *big.Int)</span> *<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    result := big.NewInt(<span class="hljs-number">0</span>)
    temp := big.NewInt(<span class="hljs-number">1</span>)

    <span class="hljs-keyword">for</span> _, coefficient := <span class="hljs-keyword">range</span> coefficients {
        term := big.NewInt(<span class="hljs-number">0</span>).Set(coefficient)
        term.Mul(term, temp)
        result.Add(result, term)
        result.Mod(result, prime)
        temp.Mul(temp, x)
        temp.Mod(temp, prime)
    }

    <span class="hljs-keyword">return</span> result
}

<span class="hljs-comment">// SplitSecret splits the secret into shares</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">SplitSecret</span><span class="hljs-params">(secret <span class="hljs-keyword">string</span>, threshold, numShares <span class="hljs-keyword">int</span>)</span> []*<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    secretBigInt := <span class="hljs-built_in">new</span>(big.Int).SetBytes([]<span class="hljs-keyword">byte</span>(secret))
    coefficients := GenerateCoefficients(secretBigInt, threshold, numShares)

    shares := <span class="hljs-built_in">make</span>([]*big.Int, numShares)
    <span class="hljs-keyword">for</span> i := <span class="hljs-number">1</span>; i &lt;= numShares; i++ {
        x := big.NewInt(<span class="hljs-keyword">int64</span>(i))
        shares[i<span class="hljs-number">-1</span>] = EvaluatePolynomial(coefficients, x)
    }

    <span class="hljs-keyword">return</span> shares
}

<span class="hljs-comment">// LagrangeInterpolation interpolates the secret using the given shares</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">LagrangeInterpolation</span><span class="hljs-params">(shares []*big.Int, x *big.Int)</span> *<span class="hljs-title">big</span>.<span class="hljs-title">Int</span></span> {
    result := big.NewInt(<span class="hljs-number">0</span>)

    <span class="hljs-keyword">for</span> i, share := <span class="hljs-keyword">range</span> shares {
        numerator := big.NewInt(<span class="hljs-number">1</span>)
        denominator := big.NewInt(<span class="hljs-number">1</span>)

        <span class="hljs-keyword">for</span> j, _ := <span class="hljs-keyword">range</span> shares {
            <span class="hljs-keyword">if</span> i != j {
                numerator.Mul(numerator, big.NewInt(<span class="hljs-number">0</span>).Sub(x, big.NewInt(<span class="hljs-keyword">int64</span>(j+<span class="hljs-number">1</span>))))
                denominator.Mul(denominator, big.NewInt(<span class="hljs-number">0</span>).Sub(big.NewInt(<span class="hljs-keyword">int64</span>(i+<span class="hljs-number">1</span>)), big.NewInt(<span class="hljs-keyword">int64</span>(j+<span class="hljs-number">1</span>))))
            }
        }

        lambda := big.NewInt(<span class="hljs-number">0</span>).Div(numerator, denominator)
        lambda.Mod(lambda, prime)

        term := big.NewInt(<span class="hljs-number">0</span>).Set(share)
        term.Mul(term, lambda)
        result.Add(result, term)
        result.Mod(result, prime)
    }

    <span class="hljs-keyword">return</span> result
}

<span class="hljs-comment">// ReconstructSecret reconstructs the secret from the shares</span>
<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">ReconstructSecret</span><span class="hljs-params">(shares []*big.Int)</span> <span class="hljs-title">string</span></span> {
    secret := LagrangeInterpolation(shares, big.NewInt(<span class="hljs-number">0</span>))
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">string</span>(secret.Bytes())
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    <span class="hljs-comment">// Define parameters</span>
    threshold := <span class="hljs-number">3</span>           <span class="hljs-comment">// Number of shares required to reconstruct the secret</span>
    numShares := <span class="hljs-number">9</span>           <span class="hljs-comment">// Total number of shares</span>
    secret := <span class="hljs-string">"Hello World!"</span> <span class="hljs-comment">// Secret to be shared</span>

    <span class="hljs-comment">// Split the secret into shares</span>
    shares := SplitSecret(secret, threshold, numShares)
    fmt.Println(<span class="hljs-string">"Shares:"</span>, shares)

    <span class="hljs-comment">// Reconstruct the secret from a subset of shares</span>
    reconstructedSecret := ReconstructSecret(shares[:threshold])
    fmt.Println(<span class="hljs-string">"Reconstructed Secret:"</span>, reconstructedSecret)
}
</code></pre>
<p>This code is an implementation of Shamir's Secret Sharing scheme, a method for distributing a secret among a group of participants, each holding a share of the secret. The secret can only be reconstructed when a sufficient number of shares are combined. Here's a breakdown of the code:</p>
<ol>
<li><p><strong>Initialization</strong>: The code initializes a large prime number <code>prime</code> which is used in calculations. This prime number is typically chosen to be sufficiently large to provide security against brute-force attacks.</p>
</li>
<li><p><strong>GenerateCoefficients</strong>: This function generates random coefficients for a polynomial of degree <code>threshold - 1</code>. The first coefficient is set as the secret, and the rest are random numbers between 0 and 255 (inclusive).</p>
</li>
<li><p><strong>EvaluatePolynomial</strong>: Given a set of coefficients and an x value, this function evaluates the polynomial at that x value using Horner's method.</p>
</li>
<li><p><strong>SplitSecret</strong>: This function splits the input secret into shares. It converts the secret to a big integer, generates random coefficients using <code>GenerateCoefficients</code>, evaluates the polynomial at different x values to get the shares.</p>
</li>
<li><p><strong>LagrangeInterpolation</strong>: This function performs Lagrange interpolation to reconstruct the secret from a subset of shares. It uses the Lagrange interpolation formula to interpolate the secret from the given shares.</p>
</li>
<li><p><strong>ReconstructSecret</strong>: This function reconstructs the secret from the shares using Lagrange interpolation.</p>
</li>
<li><p><strong>Main Function</strong>: In the main function, parameters like threshold, total number of shares, and the secret are defined. The secret is split into shares using <code>SplitSecret</code>, and then a subset of shares is used to reconstruct the secret using <code>ReconstructSecret</code>.</p>
</li>
</ol>
<p>This scheme ensures that the secret can be reconstructed only when a sufficient number of shares are combined (determined by the threshold). In this example, the threshold is set to 3, meaning at least 3 shares out of 9 are required to reconstruct the secret.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In today's world, where keeping data safe is really important, Shamir's Secret Sharing Algorithm is a strong way to share and protect secrets. It uses a math technique called polynomial interpolation to make sure that even if some parts of the secret are revealed, the whole secret stays safe. Whether it's used in businesses, to keep passwords secure, or to recover lost data, Shamir's Secret Sharing Algorithm is a key part of keeping information safe and secure.</p>
]]></content:encoded></item><item><title><![CDATA[Secure multi-party computation (Example in Golang)]]></title><description><![CDATA[Multiparty Computation (MPC) is a cryptographic technique that enables multiple parties to jointly compute a function over their private inputs while keeping those inputs confidential. The primary goal of MPC is to allow computations on sensitive dat...]]></description><link>https://articles.eminmuhammadi.com/secure-multi-party-computation-example-in-golang</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/secure-multi-party-computation-example-in-golang</guid><category><![CDATA[crypto]]></category><category><![CDATA[cybersecurity]]></category><category><![CDATA[Cryptography]]></category><category><![CDATA[Go Language]]></category><category><![CDATA[Computer Science]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Sun, 04 Feb 2024 19:08:27 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1707071570820/72a7222c-bbe8-4a09-90ea-11441795a769.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Multiparty Computation (MPC) is a cryptographic technique that enables multiple parties to jointly compute a function over their private inputs while keeping those inputs confidential. The primary goal of MPC is to allow computations on sensitive data without revealing any information about the individual inputs to the other participants. This is achieved through the use of cryptographic protocols that ensure privacy, security, and correctness in collaborative computations.</p>
<p>In traditional computation, when multiple parties want to perform a joint computation, they usually have to share their data with a central authority or each other. This can be a security concern, especially when dealing with sensitive information. MPC addresses this issue by allowing parties to jointly compute a function while keeping their inputs private.</p>
<p>Multiparty Computation (MPC) is a cryptographic technique that enables multiple parties to jointly compute a function over their private inputs while keeping those inputs confidential. The primary goal of MPC is to allow computations on sensitive data without revealing any information about the individual inputs to the other participants. This is achieved through the use of cryptographic protocols that ensure privacy, security, and correctness in collaborative computations.</p>
<p>In traditional computation, when multiple parties want to perform a joint computation, they usually have to share their data with a central authority or each other. This can be a security concern, especially when dealing with sensitive information. MPC addresses this issue by allowing parties to jointly compute a function while keeping their inputs private.</p>
<p>Key characteristics of Multiparty Computation:</p>
<ul>
<li><p><strong>Privacy Preservation:</strong> MPC ensures that no party learns anything about the private inputs of others, except for what can be inferred from the publicly revealed output.</p>
</li>
<li><p><strong>Security:</strong> The cryptographic protocols used in MPC are designed to withstand malicious behaviors, ensuring that parties cannot manipulate the computation to gain information about others' inputs.</p>
</li>
<li><p><strong>Correctness:</strong> The final output of the computation is accurate, even though the computation is performed on encrypted or masked versions of the inputs.</p>
</li>
<li><p><strong>Distributed Trust:</strong> Unlike traditional systems that rely on a central authority, MPC distributes trust among the participating parties, making it suitable for scenarios where mutual trust is limited.</p>
</li>
</ul>
<p>MPC protocols often involve complex cryptographic techniques such as homomorphic encryption, secret sharing, and secure multi-party computation protocols. While these protocols add computational overhead, they provide a powerful framework for secure collaboration in scenarios where data privacy is paramount.</p>
<h2 id="heading-implementation">Implementation</h2>
<p>In this Go implementation, we'll delve into a simplified MPC scenario involving three parties: Alice, Bob, and Charlie.</p>
<pre><code class="lang-go"><span class="hljs-keyword">package</span> main

<span class="hljs-keyword">import</span> (
    <span class="hljs-string">"math/rand"</span>
)

<span class="hljs-keyword">type</span> Node <span class="hljs-keyword">struct</span> {
    Value  <span class="hljs-keyword">float64</span>
    Inbox  []<span class="hljs-keyword">float64</span>
    Outbox []<span class="hljs-keyword">float64</span>
    Z      <span class="hljs-keyword">float64</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(node *Node)</span> <span class="hljs-title">RandomFloat64</span><span class="hljs-params">()</span> <span class="hljs-title">float64</span></span> {
    min := <span class="hljs-number">1.0</span>
    max := <span class="hljs-number">999999.0</span>

    <span class="hljs-keyword">return</span> min + rand.Float64()*(max-min)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">CreateNode</span><span class="hljs-params">(value <span class="hljs-keyword">float64</span>)</span> *<span class="hljs-title">Node</span></span> {
    <span class="hljs-keyword">return</span> &amp;Node{Value: value}
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(node *Node)</span> <span class="hljs-title">SendValueToNode</span><span class="hljs-params">(otherNode *Node)</span></span> {
    outboxVal := node.RandomFloat64()
    node.Outbox = <span class="hljs-built_in">append</span>(node.Outbox, outboxVal)
    otherNode.Inbox = <span class="hljs-built_in">append</span>(otherNode.Inbox, outboxVal)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(node *Node)</span> <span class="hljs-title">PublishZ</span><span class="hljs-params">()</span></span> {
    z := node.Value + (Sum(node.Inbox) - Sum(node.Outbox))
    node.Z = z
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">Sum</span><span class="hljs-params">(val []<span class="hljs-keyword">float64</span>)</span> <span class="hljs-title">float64</span></span> {
    <span class="hljs-keyword">var</span> sum <span class="hljs-keyword">float64</span>
    <span class="hljs-keyword">for</span> _, v := <span class="hljs-keyword">range</span> val {
        sum += v
    }
    <span class="hljs-keyword">return</span> sum
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">Average</span><span class="hljs-params">(z []<span class="hljs-keyword">float64</span>)</span> <span class="hljs-title">float64</span></span> {
    <span class="hljs-keyword">var</span> sum <span class="hljs-keyword">float64</span>
    <span class="hljs-keyword">for</span> _, v := <span class="hljs-keyword">range</span> z {
        sum += v
    }
    avg := sum / <span class="hljs-keyword">float64</span>(<span class="hljs-built_in">len</span>(z))
    <span class="hljs-keyword">return</span> avg
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">main</span><span class="hljs-params">()</span></span> {
    Alice := CreateNode(<span class="hljs-number">1000.00</span>)
    Bob := CreateNode(<span class="hljs-number">2000.00</span>)
    Charlie := CreateNode(<span class="hljs-number">3000.00</span>)

    Alice.SendValueToNode(Bob)
    Alice.SendValueToNode(Charlie)

    Bob.SendValueToNode(Alice)
    Bob.SendValueToNode(Charlie)

    Charlie.SendValueToNode(Alice)
    Charlie.SendValueToNode(Bob)

    Alice.PublishZ()
    Bob.PublishZ()
    Charlie.PublishZ()

    z := []<span class="hljs-keyword">float64</span>{Alice.Z, Bob.Z, Charlie.Z}
    avg := Average(z)

    <span class="hljs-built_in">println</span>(<span class="hljs-string">"Average: "</span>, avg)
}
</code></pre>
<p>In the provided Go code, the <code>Node</code> struct serves as a blueprint for representing individual participants in a multiparty computation. Let's break down the key components of the <code>Node</code> struct:</p>
<pre><code class="lang-go"><span class="hljs-keyword">type</span> Node <span class="hljs-keyword">struct</span> {
    ID     <span class="hljs-keyword">string</span>
    Val    <span class="hljs-keyword">float64</span>
    Inbox  []<span class="hljs-keyword">float64</span>
    Outbox []<span class="hljs-keyword">float64</span>
    Z      <span class="hljs-keyword">float64</span>
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-title">CreateNode</span><span class="hljs-params">(value <span class="hljs-keyword">float64</span>)</span> *<span class="hljs-title">Node</span></span> {
    <span class="hljs-keyword">return</span> &amp;Node{Value: value}
}
</code></pre>
<ul>
<li><p><strong>ID:</strong> The <code>ID</code> field is a unique identifier assigned to each node. It distinguishes one participant from another within the computation.</p>
</li>
<li><p><strong>Val:</strong> The <code>Val</code> field represents the initial private value associated with the node. This value serves as the starting point for the node's contribution to the collaborative computation.</p>
</li>
<li><p><strong>Inbox:</strong> The <code>Inbox</code> field is an array that functions as a receptacle for incoming values from other nodes. During the computation, nodes securely exchange information, and the <code>Inbox</code> stores these received values.</p>
</li>
<li><p><strong>Outbox:</strong> The <code>Outbox</code> field, also an array, acts as a repository for outgoing values. As part of the secure communication process, each node generates random values and shares them with other nodes, appending these values to their respective <code>Outbox</code>.</p>
</li>
<li><p><strong>Z:</strong> The <code>Z</code> field represents the final computed value of the node. It is determined based on the node's initial value (<code>Val</code>), the values received from other nodes (<code>Inbox</code>), and the values sent to other nodes (<code>Outbox</code>). The computation results in a secure, jointly computed value that contributes to the overall collaborative result.</p>
</li>
</ul>
<pre><code class="lang-go"><span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(node *Node)</span> <span class="hljs-title">SendValueToNode</span><span class="hljs-params">(otherNode *Node)</span></span> {
    outboxVal := node.RandomFloat64()
    node.Outbox = <span class="hljs-built_in">append</span>(node.Outbox, outboxVal)
    otherNode.Inbox = <span class="hljs-built_in">append</span>(otherNode.Inbox, outboxVal)
}

<span class="hljs-function"><span class="hljs-keyword">func</span> <span class="hljs-params">(node *Node)</span> <span class="hljs-title">PublishZ</span><span class="hljs-params">()</span></span> {
    z := node.Value + (Sum(node.Inbox) - Sum(node.Outbox))
    node.Z = z
}
</code></pre>
<p>The <code>SendValueToNode</code> method in the <code>Node</code> struct simulates a secure communication between two nodes. It generates a random float64 value using <code>RandomFloat64</code>, appends this value to the sender's (<code>node</code>) outbox, and then appends the same value to the receiver's (<code>otherNode</code>) inbox, mimicking a secure exchange of information between nodes.</p>
<p>The <code>PublishZ</code> method calculates the final value (<code>Z</code>) for a node in a multiparty computation. It uses the node's initial value (<code>Value</code>), subtracts the sum of values in its outbox from the sum of values in its inbox, and then adds the result to the initial value. The final computed value is stored in the <code>Z</code> field of the node.</p>
<p><strong>In result,</strong> three nodes, representing individuals Alice, Bob, and Charlie, are created with initial values of 1000.00, 2000.00, and 3000.00, respectively. They then engage in a secure exchange of random values using the <code>SendValueToNode</code> method, where each node sends a random value to the other two nodes.</p>
<p>Following the exchange, each node publishes its final computed value (<code>Z</code>) using the <code>PublishZ</code> method. The final <code>Z</code> values for Alice, Bob, and Charlie are then collected into an array. Finally, the average of these computed values is calculated using the <code>Average</code> function, and the result is printed, representing the collaborative outcome of the multiparty computation. The code simulates a basic scenario of secure collaboration among multiple nodes, computing an average value without revealing individual inputs.</p>
]]></content:encoded></item><item><title><![CDATA[Exploring the Pros and Cons of Robot Framework in Test Automation]]></title><description><![CDATA[In the fast-paced world of software testing, selecting the right test automation framework is a crucial decision that can profoundly impact the success of a project. As we venture into 2024, Robot Framework continues to be a notable contender in the ...]]></description><link>https://articles.eminmuhammadi.com/exploring-the-pros-and-cons-of-robot-framework-in-test-automation</link><guid isPermaLink="true">https://articles.eminmuhammadi.com/exploring-the-pros-and-cons-of-robot-framework-in-test-automation</guid><category><![CDATA[Software Testing]]></category><category><![CDATA[robotframework]]></category><category><![CDATA[Testing]]></category><category><![CDATA[QA]]></category><category><![CDATA[Quality Assurance]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[Automation Test Framework]]></category><category><![CDATA[test-automation]]></category><dc:creator><![CDATA[Emin Muhammadi]]></dc:creator><pubDate>Sun, 14 Jan 2024 18:05:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705254800760/c3dac0b4-3b71-49fa-bcc7-3cca0b745ebd.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the fast-paced world of software testing, selecting the right test automation framework is a crucial decision that can profoundly impact the success of a project. As we venture into 2024, <a target="_blank" href="https://robotframework.org/"><strong>Robot Framework</strong></a> continues to be a notable contender in the test automation arena. Let's explore the pros and cons of Robot Framework in greater detail, accompanied by real-world code samples to illustrate its practical application.</p>
<h2 id="heading-pros-of-robot-framework"><strong>Pros of Robot Framework</strong></h2>
<h3 id="heading-human-readable-syntax"><strong>Human-Readable Syntax</strong></h3>
<pre><code class="lang-plaintext">*** Test Cases ***
Verify Login Functionality
    [Documentation]    Verify user can log in with valid credentials
    Open Browser    https://www.example.com    chrome
    Input Text    username_field    testuser
    Input Text    password_field    testpassword
    Click Button    login_button
    Page Should Contain    Welcome, testuser!
    Close Browser
</code></pre>
<p>Robot Framework's human-readable syntax is evident in the above example. Test cases are written in plain text, making them accessible to team members regardless of their technical background. This readability promotes collaboration and simplifies the understanding of test cases.</p>
<h3 id="heading-extensibility"><strong>Extensibility</strong></h3>
<pre><code class="lang-plaintext">*** Settings ***
Library    SeleniumLibrary
Library    RequestsLibrary
Library    DatabaseLibrary

*** Test Cases ***
Verify User Registration
    [Documentation]    Verify successful user registration
    ${user_id}    Generate Unique ID
    Create User    ${user_id}    John Doe    john@example.com    secretpassword
    ${user_info}    Get User Info    ${user_id}
    Should Be Equal As Strings    ${user_info['username']}    John Doe
</code></pre>
<p>Robot Framework's extensibility is showcased in the ability to integrate various libraries seamlessly. In this example, <a target="_blank" href="https://docs.robotframework.org/docs/different_libraries/selenium"><strong>SeleniumLibrary</strong></a>, <a target="_blank" href="https://docs.robotframework.org/docs/different_libraries/requests"><strong>RequestsLibrary</strong></a>, and <a target="_blank" href="https://docs.robotframework.org/docs/different_libraries/database"><strong>DatabaseLibrary</strong></a> are combined to test user registration across different layers of an application.</p>
<h3 id="heading-cross-platform-support"><strong>Cross-Platform Support</strong></h3>
<pre><code class="lang-plaintext">*** Test Cases ***
Verify Cross-Browser Compatibility
    [Documentation]    Verify application works on different browsers
    Open Browser    https://www.example.com    chrome
    ...    executable_path=/path/to/chromedriver
    Login    testuser    testpassword
    Close Browser

    Open Browser    https://www.example.com    firefox
    ...    executable_path=/path/to/geckodriver
    Login    testuser    testpassword
    Close Browser
</code></pre>
<p>Robot Framework's cross-platform support is exemplified in the ability to execute tests across different browsers effortlessly. This flexibility ensures that applications are validated for consistent behavior across various environments.</p>
<h3 id="heading-rich-ecosystem-of-libraries">Rich Ecosystem of Libraries</h3>
<pre><code class="lang-plaintext">*** Settings ***
Library    Collections
Library    FakerLibrary

*** Test Cases ***
Generate Random Data
    [Documentation]    Verify generation of random data
    ${random_name}    Fake Name
    Should Match Regexp    ${random_name}    [A-Z][a-z]+
</code></pre>
<p>Robot Framework's rich ecosystem of libraries is illustrated in this example. The <a target="_blank" href="https://robotframework.org/robotframework/latest/libraries/Collections.html"><strong>Collections</strong></a> library and the <a target="_blank" href="https://guykisel.github.io/robotframework-faker/"><strong>FakerLibrary</strong></a> are employed to generate random data for testing purposes, showcasing the diverse functionalities provided by external libraries.</p>
<h3 id="heading-keyword-driven-testing"><strong>Keyword-Driven Testing</strong></h3>
<pre><code class="lang-plaintext">*** Test Cases ***
Search Product
    [Documentation]    Search for a product using keywords
    Open Browser To Homepage
    Enter Search Keyword    Robot Framework
    Click Search Button
    Verify Search Results    Robot Framework
</code></pre>
<p>Keyword-driven testing is a fundamental concept in Robot Framework. Test cases are composed using descriptive keywords, promoting test case organization and simplifying maintenance.</p>
<h2 id="heading-cons-of-robot-framework"><strong>Cons of Robot Framework</strong></h2>
<h3 id="heading-learning-curve-for-non-python-users"><strong>Learning Curve for Non-Python Users</strong></h3>
<pre><code class="lang-plaintext">*** Test Cases ***
Custom Keyword Example
    [Documentation]    Example of a custom keyword
    ${result}    Custom Keyword    argument1    argument2
    Should Be Equal As Strings    ${result}    Expected Result
</code></pre>
<p>For individuals without prior Python knowledge, there might be a learning curve. While Robot Framework is designed for ease of use, understanding basic Python concepts can enhance the ability to create custom keywords and extend the framework's capabilities.</p>
<h3 id="heading-limited-support-for-advanced-programming-constructs"><strong>Limited Support for Advanced Programming Constructs</strong></h3>
<pre><code class="lang-plaintext">*** Test Cases ***
Advanced Programming Example
    [Documentation]    Example using advanced programming constructs
    ${result}    Evaluate    2 + 2
    Should Be Equal As Numbers    ${result}    4
</code></pre>
<p>For projects requiring advanced programming constructs, Robot Framework may have limitations. While it supports basic programming operations, more intricate requirements might necessitate the use of a different test automation framework.</p>
<h3 id="heading-performance">Performance</h3>
<pre><code class="lang-plaintext">*** Test Cases ***
Performance Testing Example
    [Documentation]    Example of a performance test
    ${response_time}    Measure Response Time    https://www.example.com
    Should Be Less Than    ${response_time}    5000
</code></pre>
<p>In scenarios with large test suites, Robot Framework may encounter performance issues. It's important to assess the impact on execution times, especially for projects with stringent performance requirements.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In conclusion, Robot Framework continues to be a compelling choice for test automation in 2024. Its human-readable syntax, extensibility, cross-platform support, rich library ecosystem, and keyword-driven testing make it a versatile tool for a wide range of testing scenarios. However, teams should carefully consider factors such as the learning curve, performance considerations, and the nature of their projects when choosing Robot Framework. By understanding both its strengths and limitations, teams can make informed decisions and harness the power of Robot Framework effectively in their testing endeavors. The inclusion of real-world code samples helps illustrate the practical application of these pros and cons, providing a comprehensive overview of the framework's capabilities.</p>
]]></content:encoded></item></channel></rss>