<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Mastering DevOps, Full Stack Development, AWS & Cutting-Edge Tech | Insights & Tutorials]]></title><description><![CDATA[Stay ahead in the world of DevOps, full stack development, AWS, and emerging technologies. Discover expert tutorials, insights, and the latest trends to enhance your skills and knowledge.]]></description><link>https://basir.devsomeware.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1762541995518/e554a5d5-53df-44e2-8af0-98fa6ee612e2.jpeg</url><title>Mastering DevOps, Full Stack Development, AWS &amp; Cutting-Edge Tech | Insights &amp; Tutorials</title><link>https://basir.devsomeware.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 26 Apr 2026 02:12:07 GMT</lastBuildDate><atom:link href="https://basir.devsomeware.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Writing Node.js in 2025? These New Features & Practices Are Non-Negotiable]]></title><description><![CDATA[If you’re still writing Node.js apps the “old way” (CommonJS modules, dotenv, external test libs, etc.), it’s time to upgrade. 
Modern versions of Node.js offer several built-in capabilities that reduce dependencies, simplify workflow, and improve perf...]]></description><link>https://basir.devsomeware.com/writing-nodejs-in-2025-these-new-features-and-practices-are-non-negotiable</link><guid isPermaLink="true">https://basir.devsomeware.com/writing-nodejs-in-2025-these-new-features-and-practices-are-non-negotiable</guid><category><![CDATA[Node.js]]></category><category><![CDATA[TypeScript]]></category><category><![CDATA[backend]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sun, 23 Nov 2025 14:27:13 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1763907962862/c1e28472-2a62-4b88-b55b-67d5d413d91b.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>If you’re still writing Node.js apps the “old way” (CommonJS modules, dotenv, external test libs, etc.), it’s time to upgrade. Modern versions of Node.js offer several built-in capabilities that reduce dependencies, simplify workflow, and improve performance. In this blog, I’ll walk you through <strong>8 key built-in features</strong> you should adopt right now, with short code samples, clear guidance, and caveats so you can migrate confidently.</p>
<h2 id="heading-1-embrace-the-latest-import-syntax-esm">1. Embrace the latest import syntax (ESM)</h2>
<p>Stop using the old <code>require()</code>/CommonJS style if you can. Node.js fully supports ECMAScript modules (ESM) and it’s time to use them. <a target="_blank" href="https://nodejs.org/api/esm.html">Node.js ESM docs</a></p>
<p><strong>Old (outdated):</strong></p>
<pre><code class="lang-javascript">const express = require('express');
module.exports = someFunction;
</code></pre>
<p><strong>Recommended (ESM):</strong></p>
<pre><code class="lang-javascript">import express from 'express';
export default someFunction;
</code></pre>
<p>Set <code>"type": "module"</code> in your <code>package.json</code>, or use <code>.mjs</code> extensions, and you’re good. Going ESM from day one is cleaner, more aligned with modern JS, and avoids interoperability headaches.</p>
<hr />
<h2 id="heading-2-use-processenv-via-run-command-instead-of-dotenv-in-many-cases">2. Use <code>process.env</code> via run command instead of dotenv (in many cases)</h2>
<p>I know many of us used <code>dotenv.config()</code> at the top of our apps. That still works, but recent Node.js versions (20.6+) let you pass environment files at runtime without requiring <code>dotenv</code>. That’s one less dependency to manage.</p>
<p><strong>Old pattern:</strong></p>
<pre><code class="lang-javascript">import dotenv from 'dotenv';
dotenv.config();

console.log(process.env.MY_SECRET);
</code></pre>
<p><strong>Recommended runtime command:</strong></p>
<pre><code class="lang-bash">node --env-file=.env app.js
</code></pre>
<p>This way you can skip or minimise <code>dotenv</code> usage (especially in production) and rely more on the runtime’s built-in env file support. It keeps things simple and reduces third-party reliance.</p>
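<p>To make that concrete, here’s a minimal sketch of the application side, using the same <code>MY_SECRET</code> variable from the dotenv example above (assuming Node.js 20.6+ for <code>--env-file</code>):</p>

```javascript
// app.js — start with:  node --env-file=.env app.js
// No dotenv import needed: Node (20.6+) loads .env into process.env
// before this code runs.
const secret = process.env.MY_SECRET ?? '(MY_SECRET not set)';
console.log(secret);
```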
<hr />
<h2 id="heading-3-built-in-test-runner-in-nodejs">3. Built-in test runner in Node.js</h2>
<p>Remember when you had to install and configure Jest, Mocha or Vitest for testing? That’s changing. Node.js now ships with a built-in test runner. <a target="_blank" href="https://nodejs.org/api/test.html">Node.js test runner docs</a></p>
<p><strong>Sample code:</strong></p>
<pre><code class="lang-javascript">// math.test.mjs
import { test } from 'node:test';
import assert from 'node:assert/strict';
import { add } from './math.js';

test('adds two numbers', () =&gt; {
  assert.strictEqual(add(2, 3), 5);
});
</code></pre>
<p>Then simply run:</p>
<pre><code class="lang-bash">node --<span class="hljs-built_in">test</span>
</code></pre>
<p>Benefits: zero additional dependencies, minimal config, and a smoother dev experience. Of course, if you need advanced features, you may still pick Jest/Mocha, but for many use cases this built-in runner suffices.</p>
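<p>For completeness: the test above imports <code>add</code> from <code>./math.js</code>, which isn’t shown. A minimal module matching that import might look like this (an assumed shape, not from the original post):</p>

```javascript
// math.js — assumed shape of the module under test (not shown in the post)
export function add(a, b) {
  return a + b;
}
```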
<hr />
<h2 id="heading-4-native-sqlite-support-yes-really">4. Native SQLite support (yes, really)</h2>
<p>One of the big surprises: Node.js v22.5.0 introduced a built-in experimental module for SQLite: <code>node:sqlite</code>. <a target="_blank" href="https://betterstack.com/community/guides/scaling-nodejs/nodejs-sqlite/">Better Stack guide</a></p>
<p><strong>Note (call-out):</strong> Use the special prefix <code>node:packageName</code> when importing built-in modules. This ensures you’re referencing a built-in library, not a third-party with the same name (security + clarity win).</p>
<p><strong>Example usage:</strong></p>
<pre><code class="lang-javascript">import { DatabaseSync } from 'node:sqlite';

const db = new DatabaseSync('example.db');

// Run raw SQL with exec(); use prepared statements for parameters.
db.exec('CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT)');

const insert = db.prepare('INSERT INTO users(name) VALUES (?)');
insert.run('Alice');

const row = db.prepare('SELECT * FROM users WHERE name = ?').get('Alice');
console.log(row);
</code></pre>
<p>⚠️ Keep in mind: still marked experimental, fewer features than popular third-party libs (like concurrency, async API). Use with caution for production. <a target="_blank" href="https://blog.logrocket.com/using-built-in-sqlite-module-node-js/?utm_source=chatgpt.com">LogRocket Blog</a></p>
<hr />
<h2 id="heading-5-type-skipper-support-for-typescript-files">5. Type-stripping support for TypeScript files</h2>
<p>If you’re doing TypeScript, there’s good news: recent Node.js versions let you run <code>.ts</code> files directly, as long as they only use erasable type syntax. <a target="_blank" href="https://nodejs.org/api/typescript.html">Node.js TypeScript docs</a></p>
<p><strong>Example:</strong></p>
<pre><code class="lang-typescript">// app.ts
const greet = (name: string): string =&gt; {
  return `Hello, ${name}`;
};

console.log(greet('World'));
</code></pre>
<p>You can run:</p>
<pre><code class="lang-bash">node app.ts
</code></pre>
<p><strong>Important call-out:</strong> This only supports <em>type stripping</em> (erasable TypeScript syntax). Features that emit runtime code (enums, namespaces, parameter properties) still need a compile/transpile step. Use this feature cautiously, understand its limitations, and stay tuned for future updates.</p>
<hr />
<h2 id="heading-6-replace-nodemon-with-nodes-built-in-watch-flag">6. Replace nodemon with Node’s built-in <code>--watch</code> flag</h2>
<p>You probably used nodemon for auto-restarting on file changes. Now, Node.js offers a built-in flag <code>--watch</code>. Make your life simpler.</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-bash">node --watch server.js
</code></pre>
<p>On changes, Node will auto-reload. Fewer dependencies and tools to maintain.</p>
<hr />
<h2 id="heading-7-built-in-fetch-no-more-axios-or-node-fetch">7. Built-in <code>fetch</code> (no more axios or node-fetch)</h2>
<p>Modern Node.js versions include the Fetch API natively—just like in browsers. That means you don’t always have to import axios or <code>node-fetch</code>.</p>
<p><strong>Example:</strong></p>
<pre><code class="lang-javascript">const response = await fetch('https://api.example.com/data');
const data = await response.json();
console.log(data);
</code></pre>
<p>It saves you package installs and simplifies your stack.</p>
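<p>The snippet above assumes a reachable external API. Here’s a self-contained sketch instead: it spins up a throwaway local HTTP server, and also shows the <code>response.ok</code> status check you’ll usually want (the <code>/data</code> path is illustrative):</p>

```javascript
import http from 'node:http';
import { once } from 'node:events';

// Throwaway local server so the example needs no external API.
const server = http.createServer((req, res) => {
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true, path: req.url }));
});
server.listen(0); // 0 = pick any free port
await once(server, 'listening');

const { port } = server.address();
const response = await fetch(`http://localhost:${port}/data`);
if (!response.ok) throw new Error(`HTTP ${response.status}`);
const data = await response.json();
console.log(data); // { ok: true, path: '/data' }

server.close();
```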
<hr />
<h2 id="heading-8-super-fast-dev-script-with-node-run-dev-yes-no-npm-needed">8. Super-fast dev script with <code>node --run dev</code> (yes, no npm needed)</h2>
<p>This is one of the cleanest improvements in modern Node.js.<br />You <strong>don’t need</strong> the slower <code>npm run dev</code> anymore: npm adds extra overhead (pre-scripts, post-scripts, and other lifecycle steps). Note that <code>node --run</code> skips those lifecycle scripts on purpose, so avoid it if your workflow relies on them.</p>
<p>Now Node.js gives you a <strong>native script runner</strong> using:</p>
<pre><code class="lang-bash">node --run dev
</code></pre><p>So you define your scripts in <code>package.json</code> like this:</p>
<pre><code class="lang-json">{
  "scripts": {
    "dev": "node --watch server.js"
  }
}
</code></pre><p>And instead of:</p>
<pre><code class="lang-bash">npm run dev
</code></pre>
<p>You can directly run:</p>
<pre><code class="lang-bash">node --run dev
</code></pre>
<p>This is <strong>lightning fast</strong>: no npm layer, no extra processing.<br />Just pure Node executing your script instantly, perfect for local development.</p>
<hr />
<h3 id="heading-final-thoughts">Final Thoughts</h3>
<p>If you’re working with Node.js right now (especially v22+), start shifting to these built-in features. They genuinely cut down your dependency list, reduce config headaches, and make your whole dev workflow faster and cleaner. This is the direction Node.js is moving toward, so adopting early gives you an edge.</p>
<p>But remember: not everything is 100% production-ready. Features like native SQLite and TypeScript type stripping are still evolving. Use them smartly, test properly, and make sure your project actually benefits from the switch.</p>
<p>Node.js is changing fast, and these built-in upgrades are going to become the new “default way” of writing backend apps. So stay updated, experiment, and keep your stack modern.</p>
<p>More improvements are coming, so stay tuned.</p>
]]></content:encoded></item><item><title><![CDATA[What is Kafka?]]></title><description><![CDATA[Kafka in detail!
https://kafka.apache.org/

What is distributed
You can scale Kafka horizontally by adding more nodes that run your Kafka brokers
Event streaming
If you want to build a system where one process produces events that can be consumed ...]]></description><link>https://basir.devsomeware.com/what-is-kafka</link><guid isPermaLink="true">https://basir.devsomeware.com/what-is-kafka</guid><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sat, 08 Nov 2025 07:25:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762584549996/35f7daa2-4700-4501-a8b7-139b459aedeb.webp" length="0" type="image/webp"/><content:encoded><![CDATA[<h1 id="heading-kafka-in-detailed"><strong>Kafka in detail!</strong></h1>
<p><a target="_blank" href="https://kafka.apache.org/">https://kafka.apache.org/</a></p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2Fdd2015a1-5c5c-4ef3-91ad-2c6a8c92600c%2FScreenshot_2024-07-10_at_2.40.11_PM.png?table=block&amp;id=ed871567-37d8-44cf-9634-1858ab454b54&amp;cache=v2" alt="notion image" /></p>
<h4 id="heading-what-is-distributed"><strong>What is distributed?</strong></h4>
<p>You can scale Kafka horizontally by adding more nodes that run your Kafka <code>brokers</code>.</p>
<h4 id="heading-event-streaming"><strong>Event streaming</strong></h4>
<p>If you want to build a system where one process <code>produces</code> events that can be consumed by multiple <code>consumers</code>, Kafka is a natural fit.</p>
<h4 id="heading-examples-of-apps"><strong>Examples of apps</strong></h4>
<p>Payment notifications</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2Feadf2285-35e0-4b6e-bdc6-6459d7ad2223%2FScreenshot_2024-07-10_at_2.47.40_PM.png?table=block&amp;id=93fb3e9a-3e77-4670-a7d8-1762154c2f57&amp;cache=v2" alt="notion image" /></p>
<h1 id="heading-jargon"><strong>Jargon</strong></h1>
<h4 id="heading-cluster-and-broker"><strong>Cluster and broker</strong></h4>
<p>A group of machines running Kafka is known as a Kafka cluster.</p>
<p>Each individual machine is called a broker.</p>
<h4 id="heading-producers"><strong>Producers</strong></h4>
<p>As the name suggests, producers are used to <code>publish</code> data to a topic</p>
<h4 id="heading-consumers"><strong>Consumers</strong></h4>
<p>As the name suggests, consumers consume from a topic</p>
<h4 id="heading-topics"><strong>Topics</strong></h4>
<p>A topic is a logical channel to which producers send messages and from which consumers read messages.</p>
<h4 id="heading-offsets"><strong>Offsets</strong></h4>
<p>Consumers keep track of their position in the topic by maintaining offsets, which represent the position of the last consumed message. Kafka can manage offsets automatically or allow consumers to manage them manually.</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2Fa98a741c-5ae0-41dc-b5cb-d8246ad491fb%2FScreenshot_2024-07-10_at_3.30.42_PM.png?table=block&amp;id=ffeb8e21-320d-4b3c-b159-4a7bf83a35ed&amp;cache=v2" alt="notion image" /></p>
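<p>As a hedged sketch of manual offset management with kafkajs (only the option shapes are shown, since actually running this needs a broker): <code>commitOffsets</code> expects the <em>next</em> offset to read, i.e. last consumed + 1.</p>

```javascript
// kafkajs-style manual offset management (sketch; needs a real consumer to run).
function nextOffset(messageOffset) {
  // Offsets are strings in kafkajs; commit last consumed + 1.
  return (BigInt(messageOffset) + 1n).toString();
}

const runOptions = {
  autoCommit: false, // we take over offset commits ourselves
  eachMessage: async ({ topic, partition, message }) => {
    // ...process the message, then (with a connected consumer):
    // await consumer.commitOffsets([
    //   { topic, partition, offset: nextOffset(message.offset) },
    // ]);
  },
};

console.log(nextOffset('41')); // "42"
```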
<h4 id="heading-retention"><strong>Retention</strong></h4>
<p>Kafka topics have configurable retention policies, determining how long data is stored before being deleted. This allows for both real-time processing and historical data replay.</p>
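<p>Retention can also be set per topic from the client. A hedged kafkajs sketch (the topic name and the 7-day value are illustrative; executing <code>createTopics</code> needs a running broker, so only the request object is built and inspected here):</p>

```javascript
// Topic definition with an explicit retention policy (kafkajs shape).
const topicConfig = {
  topic: 'payment-done',            // illustrative topic name
  numPartitions: 3,
  configEntries: [
    // Keep messages for 7 days before Kafka deletes them.
    { name: 'retention.ms', value: String(7 * 24 * 60 * 60 * 1000) },
  ],
};

// With a broker available, you would then run:
//   const admin = kafka.admin();
//   await admin.connect();
//   await admin.createTopics({ topics: [topicConfig] });
//   await admin.disconnect();
console.log(topicConfig.configEntries[0].value); // "604800000"
```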
<h4 id="heading-partitions-and-consumer-groups-we-will-cover-eventually"><strong>Partitions and Consumer groups (we will cover eventually)</strong></h4>
<h1 id="heading-start-kafka-locally"><strong>Start kafka locally</strong></h1>
<p>Ref - <a target="_blank" href="https://kafka.apache.org/quickstart#quickstart_createtopic">https://kafka.apache.org/quickstart</a></p>
<h4 id="heading-using-docker"><strong>Using docker</strong></h4>
<pre><code class="lang-bash">docker run -p 9092:9092 apache/kafka:3.7.1
</code></pre>
<h4 id="heading-get-shell-access-to-container"><strong>Get shell access to container</strong></h4>
<pre><code class="lang-bash">docker ps
docker <span class="hljs-built_in">exec</span> -it container_id /bin/bash
<span class="hljs-built_in">cd</span> /opt/kafka/bin
</code></pre>
<h4 id="heading-create-a-topic"><strong>Create a topic</strong></h4>
<pre><code class="lang-bash">./kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
</code></pre>
<h4 id="heading-publish-to-the-topic"><strong>Publish to the topic</strong></h4>
<pre><code class="lang-bash">./kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
</code></pre>
<h4 id="heading-consuming-from-the-topic"><strong>Consuming from the topic</strong></h4>
<pre><code class="lang-bash">./kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
</code></pre>
<h2 id="heading-kafka-in-a-nodejs-process"><strong>Kafka in a Node.js process</strong></h2>
<p>Ref <a target="_blank" href="https://www.npmjs.com/package/kafkajs">https://www.npmjs.com/package/kafkajs</a></p>
<ul>
<li>Initialise project</li>
</ul>
<pre><code class="lang-bash">npm init -y
npx tsc --init
</code></pre>
<ul>
<li>Update tsconfig.json</li>
</ul>
<pre><code class="lang-json">"rootDir": "./src",
"outDir": "./dist"
</code></pre>
<ul>
<li>Add <code>src/index.ts</code></li>
</ul>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { Kafka } <span class="hljs-keyword">from</span> <span class="hljs-string">"kafkajs"</span>;

<span class="hljs-keyword">const</span> kafka = <span class="hljs-keyword">new</span> Kafka({
  <span class="hljs-attr">clientId</span>: <span class="hljs-string">"my-app"</span>,
  <span class="hljs-attr">brokers</span>: [<span class="hljs-string">"localhost:9092"</span>]
})

<span class="hljs-keyword">const</span> producer = kafka.producer();

<span class="hljs-keyword">const</span> consumer = kafka.consumer({<span class="hljs-attr">groupId</span>: <span class="hljs-string">"my-app3"</span>});


<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">await</span> producer.connect();
  <span class="hljs-keyword">await</span> producer.send({
    <span class="hljs-attr">topic</span>: <span class="hljs-string">"quickstart-events"</span>,
    <span class="hljs-attr">messages</span>: [{
      <span class="hljs-attr">value</span>: <span class="hljs-string">"hi there"</span>
    }]
  })

  <span class="hljs-keyword">await</span> consumer.connect();
  <span class="hljs-keyword">await</span> consumer.subscribe({
    <span class="hljs-attr">topic</span>: <span class="hljs-string">"quickstart-events"</span>, <span class="hljs-attr">fromBeginning</span>: <span class="hljs-literal">true</span>
  })

  <span class="hljs-keyword">await</span> consumer.run({
    <span class="hljs-attr">eachMessage</span>: <span class="hljs-keyword">async</span> ({ topic, partition, message }) =&gt; {
      <span class="hljs-built_in">console</span>.log({
        <span class="hljs-attr">offset</span>: message.offset,
        <span class="hljs-attr">value</span>: message?.value?.toString(),
      })
    },
  })
}


main();
</code></pre>
<ul>
<li>Update package.json</li>
</ul>
<pre><code class="lang-json">"scripts": {
    "start": "tsc -b &amp;&amp; node dist/index.js"
},
</code></pre>
<ul>
<li>Start the process</li>
</ul>
<pre><code class="lang-bash">npm run start
</code></pre>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2F35ceceba-a3df-4d1a-b941-b01306d9d5ea%2FScreenshot_2024-07-10_at_4.17.21_PM.png?table=block&amp;id=47e3b5d5-8e84-4c5c-b613-36350ad282da&amp;cache=v2" alt="notion image" /></p>
<h1 id="heading-breaking-into-prodcuer-and-consumer-scripts"><strong>Breaking into producer and consumer scripts</strong></h1>
<p>Let’s break our logic down into two separate files:</p>
<ul>
<li>producer.ts</li>
</ul>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { Kafka } <span class="hljs-keyword">from</span> <span class="hljs-string">"kafkajs"</span>;

<span class="hljs-keyword">const</span> kafka = <span class="hljs-keyword">new</span> Kafka({
  <span class="hljs-attr">clientId</span>: <span class="hljs-string">"my-app"</span>,
  <span class="hljs-attr">brokers</span>: [<span class="hljs-string">"localhost:9092"</span>]
})

<span class="hljs-keyword">const</span> producer = kafka.producer();

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">await</span> producer.connect();
  <span class="hljs-keyword">await</span> producer.send({
    <span class="hljs-attr">topic</span>: <span class="hljs-string">"quickstart-events"</span>,
    <span class="hljs-attr">messages</span>: [{
      <span class="hljs-attr">value</span>: <span class="hljs-string">"hi there"</span>
    }]
  });
}


main();
</code></pre>
<ul>
<li>consumer.ts</li>
</ul>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { Kafka } <span class="hljs-keyword">from</span> <span class="hljs-string">"kafkajs"</span>;

<span class="hljs-keyword">const</span> kafka = <span class="hljs-keyword">new</span> Kafka({
  <span class="hljs-attr">clientId</span>: <span class="hljs-string">"my-app"</span>,
  <span class="hljs-attr">brokers</span>: [<span class="hljs-string">"localhost:9092"</span>]
})

<span class="hljs-keyword">const</span> consumer = kafka.consumer({ <span class="hljs-attr">groupId</span>: <span class="hljs-string">"my-app3"</span> });


<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">await</span> consumer.connect();
  <span class="hljs-keyword">await</span> consumer.subscribe({
    <span class="hljs-attr">topic</span>: <span class="hljs-string">"quickstart-events"</span>, <span class="hljs-attr">fromBeginning</span>: <span class="hljs-literal">true</span>
  })

  <span class="hljs-keyword">await</span> consumer.run({
    <span class="hljs-attr">eachMessage</span>: <span class="hljs-keyword">async</span> ({ topic, partition, message }) =&gt; {
      <span class="hljs-built_in">console</span>.log({
        <span class="hljs-attr">offset</span>: message.offset,
        <span class="hljs-attr">value</span>: message?.value?.toString(),
      })
    },
  })
}


main();
</code></pre>
<ul>
<li>Update package.json</li>
</ul>
<pre><code class="lang-json">"scripts": {
    "start": "tsc -b &amp;&amp; node dist/index.js",
    "produce": "tsc -b &amp;&amp; node dist/producer.js",
    "consume": "tsc -b &amp;&amp; node dist/consumer.js"
},
</code></pre>
<ul>
<li>Try starting multiple consumers and see whether each one receives the messages produced.</li>
</ul>
<p>Notice we specified a <code>consumer group</code> (my-app3).</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2Fc0f866ab-9544-4ec7-bce6-c8818af57fed%2FScreenshot_2024-07-10_at_5.25.39_PM.png?table=block&amp;id=6ddd921a-7a16-4347-9686-432df1b22f6c&amp;cache=v2" alt="notion image" /></p>
<h1 id="heading-consumer-groups-and-partitions"><strong>Consumer groups and partitions</strong></h1>
<h4 id="heading-consumer-group"><strong>Consumer group</strong></h4>
<p>A consumer group is a group of consumers that coordinate to consume messages from a Kafka topic.</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2F2551afde-b3b2-4499-8c0c-5ac5db8a65a5%2FScreenshot_2024-07-10_at_5.28.10_PM.png?table=block&amp;id=f6a77bd2-5944-4d95-976c-8c1d2bb8f719&amp;cache=v2" alt="notion image" /></p>
<p><strong>Purpose:</strong></p>
<ul>
<li><strong>Load Balancing:</strong> Distribute the processing load among multiple consumers.</li>
</ul>
<ul>
<li><strong>Fault Tolerance:</strong> If one consumer fails, Kafka automatically redistributes the partitions that the failed consumer was handling to the remaining consumers in the group.</li>
</ul>
<ul>
<li><strong>Parallel Processing:</strong> Consumers in a group can process different partitions in parallel, improving throughput and scalability.</li>
</ul>
<h4 id="heading-partitions"><strong>Partitions</strong></h4>
<p>Partitions are subdivisions of a Kafka topic. Each partition is an ordered, immutable sequence of messages that is appended to by producers. Partitions enable Kafka to scale horizontally and allow for parallel processing of messages.</p>
<h4 id="heading-how-is-a-partition-decided"><strong>How is a partition decided?</strong></h4>
<p>When a message is produced to a Kafka topic, it is assigned to a specific partition. This can be done using a round-robin method, a hash of the message key, or a custom partitioning strategy.</p>
<p>Usually you’ll use something like a <code>user id</code> as the <code>message key</code>, so all messages from the same user go to the same partition (that way, a single busy user can’t starve everyone else).</p>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2F7fc0019b-0671-4b74-87b7-7c5e02f527dc%2FScreenshot_2024-07-10_at_5.34.47_PM.png?table=block&amp;id=83a339f3-8a03-4162-943a-641838776f51&amp;cache=v2" alt="notion image" /></p>
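<p>To make this concrete, here’s a toy sketch of key-based partition selection. Kafka’s default partitioner actually uses murmur2 hashing; the simple hash below is just a stand-in to illustrate that the same key always maps to the same partition:</p>

```javascript
// Toy stand-in for Kafka's key → partition mapping (NOT murmur2).
function pickPartition(key, numPartitions) {
  let hash = 0;
  for (const ch of key) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0; // simple 32-bit rolling hash
  }
  return Math.abs(hash) % numPartitions;
}

// Same key → same partition, so one user's messages stay ordered
// on a single partition:
console.log(pickPartition('user1', 3) === pickPartition('user1', 3)); // true
```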
<h4 id="heading-multiple-consumer-groups"><strong>Multiple consumer groups</strong></h4>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2Fd55d31bc-6733-4b44-b29f-c068def20edc%2FScreenshot_2024-07-10_at_5.36.09_PM.png?table=block&amp;id=31961a0e-a0f8-4aef-b8e2-a146990ff7c6&amp;cache=v2" alt="notion image" /></p>
<h1 id="heading-three-cases-to-discuss"><strong>Three cases to discuss</strong></h1>
<h4 id="heading-equal-number-of-partitions-and-consumers"><strong>Equal number of partitions and consumers</strong></h4>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2F75478041-1808-4a32-b5b1-6468d3c5cd6e%2FScreenshot_2024-07-10_at_5.58.22_PM.png?table=block&amp;id=e21e3f03-27d8-424e-9e5c-92d79ff92f29&amp;cache=v2" alt="notion image" /></p>
<h4 id="heading-more-partitions"><strong>More partitions</strong></h4>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2F34cf697b-1d69-4106-897b-125a9939d7c7%2FScreenshot_2024-07-10_at_5.58.51_PM.png?table=block&amp;id=63bb2037-c8bd-4825-a16f-7287673bf35c&amp;cache=v2" alt="notion image" /></p>
<h4 id="heading-more-consumers"><strong>More consumers</strong></h4>
<p><img src="https://www.notion.so/image/https%3A%2F%2Fprod-files-secure.s3.us-west-2.amazonaws.com%2F085e8ad8-528e-47d7-8922-a23dc4016453%2Ff2060b34-80ea-41d5-9617-df65d189c86f%2FScreenshot_2024-07-10_at_5.59.05_PM.png?table=block&amp;id=371bde0a-fac2-4618-beb5-371df0145ed7&amp;cache=v2" alt="notion image" /></p>
<h1 id="heading-partitioning-strategy"><strong>Partitioning strategy</strong></h1>
<p>When producing messages, you can assign a key that uniquely identifies the event.</p>
<p>Kafka will hash this key and use the hash to determine the partition. This ensures that all messages with the same key (say, for the same user) are sent to the same partition.</p>
<p>💡</p>
<p>Why would you want messages from the same user to go to the same partition? Say a single user has too many notifications; this way they only choke a single partition rather than all of them.</p>
<ul>
<li>Create a new <code>producer-user.ts</code> file, pass in a <code>key</code> when producing the message</li>
</ul>
<pre><code class="lang-javascript"><span class="hljs-keyword">import</span> { Kafka } <span class="hljs-keyword">from</span> <span class="hljs-string">"kafkajs"</span>;

<span class="hljs-keyword">const</span> kafka = <span class="hljs-keyword">new</span> Kafka({
  <span class="hljs-attr">clientId</span>: <span class="hljs-string">"my-app"</span>,
  <span class="hljs-attr">brokers</span>: [<span class="hljs-string">"localhost:9092"</span>]
})

<span class="hljs-keyword">const</span> producer = kafka.producer();

<span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">main</span>(<span class="hljs-params"></span>) </span>{
  <span class="hljs-keyword">await</span> producer.connect();
  <span class="hljs-keyword">await</span> producer.send({
    <span class="hljs-attr">topic</span>: <span class="hljs-string">"payment-done"</span>,
    <span class="hljs-attr">messages</span>: [{
      <span class="hljs-attr">value</span>: <span class="hljs-string">"hi there"</span>,
      <span class="hljs-attr">key</span>: <span class="hljs-string">"user1"</span>
    }]
  });
}

main();
</code></pre>
<ul>
<li>Add <code>produce:user</code> script</li>
</ul>
<pre><code class="lang-json">"produce:user": "tsc -b &amp;&amp; node dist/producer-user.js",
</code></pre>
<ul>
<li>Start 3 consumers and one producer. Notice that all messages reach the same consumer.</li>
</ul>
<pre><code class="lang-bash">npm run produce:user
</code></pre>
]]></content:encoded></item><item><title><![CDATA[What the heck is Cron?]]></title><description><![CDATA[Have you ever noticed promotional emails, marketing campaigns, or reminder messages landing in your inbox at the same time every day or week? If you’re a curious tech person like me, you might wonder what’s running behind the scenes. Is it a person h...]]></description><link>https://basir.devsomeware.com/what-the-heck-is-cron</link><guid isPermaLink="true">https://basir.devsomeware.com/what-the-heck-is-cron</guid><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Fri, 07 Nov 2025 18:30:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/sJHO8cJcgZc/upload/f77ac1f569ff66a2f204264afbcddd0c.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Have you ever noticed promotional emails, marketing campaigns, or reminder messages landing in your inbox at the same time every day or week? If you’re a curious tech person like me, you might wonder what’s running behind the scenes. Is it a person hitting “send”? A magic button?<br />Meet <strong>cron</strong> the (not scary) job scheduler.</p>
<p><strong>What is cron?</strong><br /><code>cron</code> is a Unix/Linux utility that schedules and runs tasks automatically at fixed times, dates, or intervals. A “cron job” is simply a command or script that cron runs on a schedule you define. People use cron for all kinds of repetitive automation: sending emails, generating reports, cleaning up temporary files, running backups, triggering data pipelines, and more.</p>
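<p>For a first taste, a crontab entry simply pairs a schedule with a command. A hedged example (the script and log paths are illustrative):</p>

```shell
# Open your crontab for editing:
#   crontab -e
#
# Field order: minute  hour  day-of-month  month  day-of-week  command
# Run a backup script every day at 2:30 AM, appending its output to a log:
30 2 * * * /home/user/scripts/backup.sh >> /var/log/backup.log 2>&1
```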
<p><strong>Why developers love cron (and its modern cousins)</strong></p>
<ul>
<li><p>It’s reliable: once scheduled, cron will run tasks at the right time without human intervention.</p>
</li>
<li><p>It’s simple: a single text file (the crontab) can express complex schedules.</p>
</li>
<li><p>It scales to many use cases: from simple daily cleanups to orchestrating big data workflows.<br />  That said, for complex workflows and dependencies, teams often use more advanced schedulers (e.g., Apache Airflow) or custom distributed schedulers. But cron remains the simplest, battle-tested tool for time-based automation.</p>
</li>
</ul>
<p><strong>Real-world uses (quick picture)</strong></p>
<ul>
<li><p>Sending promotional or reminder emails at scheduled times.</p>
</li>
<li><p>Running ETL and analytics jobs nightly.</p>
</li>
<li><p>Cleaning logs or rotating backups.</p>
</li>
<li><p>Triggering batch jobs, like invoice generation or report exports.</p>
</li>
</ul>
<h2 id="heading-the-starry-secret-of-cron-jobs">“The Starry Secret of Cron Jobs!”</h2>
<p>Ever seen something like <code>* * * * *</code> in a cron job and thought — “what the heck are all these stars doing here?” 😅<br />Well, each <code>*</code> (asterisk) represents a <strong>time field</strong>: <strong>minute, hour, day of month, month, and day of week</strong>, in that exact order. So when you see <code>* * * * *</code>, it literally means <strong>“run every minute of every hour of every day of every month, forever.”</strong></p>
<p>In simple words, it’s like telling your server:<br />💬 <em>“Bro, no breaks. Work every single minute — 24x7!”</em> 😆<br />Still not clicking? 😆 No worries, here’s an image for a better understanding:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762537570579/c2b8349c-1c3c-4202-90fc-7a9c31ec09ac.png" alt class="image--center mx-auto" /></p>
<p>🧠 Let’s Be Serious Now - Time to Practice Some Cron Magic</p>
<p>Alright guys, enough of the theory talk! Let’s get our hands dirty and see how those little stars actually work. If you want to really <strong>grasp cron</strong>, you’ve got to <em>practice</em> these patterns instead of just reading them. So here are a few simple, real-life examples that’ll help you master cron expressions like a pro.</p>
<h4 id="heading-1-every-day-at-3-pm">🕒 1. Every day at 3 PM</h4>
<pre><code class="lang-bash">0 15 * * *
</code></pre>
<p><strong>Meaning:</strong><br />The first <code>0</code> means <strong>start at the 0th minute</strong>, the <code>15</code> means <strong>the 15th hour (3 PM)</strong>, and the remaining stars mean <strong>every day of the month, every month, every day of the week</strong>.<br />So this runs exactly once a day at <strong>3:00 PM</strong>.</p>
<h4 id="heading-2-every-day-at-1-30pm">🕐 2. Every day at 1:30 PM</h4>
<pre><code class="lang-bash">30 13 * * *
</code></pre>
<p><strong>Meaning:</strong><br />Run the task when the <strong>hour is 13 (1 PM)</strong> and the <strong>minute is 30</strong>. Simple as that! Cron uses the <strong>24-hour format</strong>, so 13 stands for 1 PM.</p>
<h4 id="heading-3-every-week-on-monday">📅 3. Every week on Monday</h4>
<pre><code class="lang-bash">0 9 * * 1
</code></pre>
<p><strong>Meaning:</strong><br />This runs at <strong>9:00 AM every Monday</strong>. The <code>1</code> at the end represents <strong>Monday</strong> (because cron counts days of the week as 0–6, where 0 = Sunday).</p>
<h4 id="heading-4-every-month-on-the-1st">🗓️ 4. Every month on the 1st</h4>
<pre><code class="lang-bash">0 10 1 * *
</code></pre>
<p><strong>Meaning:</strong><br />The <code>1</code> in the third position means <strong>the 1st day of every month</strong>, and <code>10</code> means <strong>10 AM</strong>.<br />So this runs once a month on the 1st day, at <strong>10:00 AM</strong> sharp.</p>
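<p>By the way, each field doesn’t have to be a single number or a <code>*</code>. Standard crontab syntax also accepts ranges, lists, and step values, which cover a lot of real-world schedules (the comments below are just example readings):</p>
<pre><code class="lang-bash">0 9 * * 1-5     # 9:00 AM on weekdays only (Mon to Fri)
0 */6 * * *     # every 6 hours, on the hour
0 12 1,15 * *   # 12:00 PM on the 1st and 15th of every month
</code></pre>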
<hr />
<p>💡 <em>Tip:</em> Always remember the order <strong>minute, hour, day of month, month, day of week.</strong> Once you get that in your head, cron expressions will start making perfect sense.</p>
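<p>Bonus: most cron implementations (including the Vixie cron / cronie that ships with Ubuntu) also support shortcut strings, so you can skip the stars entirely for common schedules. (<code>/path/to/script.sh</code> below is just a placeholder for your own command.)</p>
<pre><code class="lang-bash">@hourly  /path/to/script.sh   # same as: 0 * * * *
@daily   /path/to/script.sh   # same as: 0 0 * * *
@weekly  /path/to/script.sh   # same as: 0 0 * * 0
@reboot  /path/to/script.sh   # run once, right after the system boots
</code></pre>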
<h2 id="heading-lets-get-serious-guys-time-to-play-with-cron-like-a-real-one">Let’s Get Serious Guys - Time to Play with Cron Like a Real One</h2>
<p>Alright legends enough of the talking, it’s time to touch some code. You’ve read the stars, now let’s <em>make them shine.</em></p>
<p>We’re going to make cron print the current date every minute (so you can flex that “automation” muscle). After that, we’ll move to something more <em>Basir-level</em>: automatic database backups every night at 2 AM. Yeah, prod stuff baby 😎</p>
<hr />
<h3 id="heading-step-1-get-into-the-machine">🧠 Step 1: Get into the Machine</h3>
<p>If you’re already on <strong>Ubuntu</strong>, cool. If not, spin one up anywhere - Docker, AWS, whatever you like.<br />I don’t care how, just get inside a terminal. You can’t learn cron by just reading; this isn’t a fairytale, it’s tech.</p>
<p>Check if cron is alive:</p>
<pre><code class="lang-bash">sudo systemctl status cron
</code></pre>
<p>If it’s not active, just hit:</p>
<pre><code class="lang-bash">sudo systemctl start cron
</code></pre>
<p>Cron is like your background butler. He doesn’t speak, but he gets the job done quietly every time. 🕶️</p>
<hr />
<h3 id="heading-step-2-lets-make-cron-talk-print-date-every-minute">🕒 Step 2: Let’s Make Cron Talk — Print Date Every Minute</h3>
<p>Now fire this command:</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>Inside that file, drop this line:</p>
<pre><code class="lang-bash">* * * * * date &gt;&gt; ~/dates.txt
</code></pre>
<p>That’s it. Every single minute, this command will run <code>date</code> and append the output to <code>dates.txt</code>.<br />After 2–3 minutes, check it:</p>
<pre><code class="lang-bash">cat ~/dates.txt
</code></pre>
<p>If you see multiple timestamps there - congrats, you just made cron do your bidding. 😎<br />Every minute, like clockwork.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762539509650/3549ba21-b529-4df3-b726-29fb9c89d24c.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-a-little-self-love-moment">🧍‍♂️ A Little Self-Love Moment</h3>
<p>You’re on <strong>Basir’s Blog</strong>, my friend.<br />And Basir doesn’t just <em>read</em> commands - he <em>runs</em> them, breaks them, fixes them, and runs them again till it’s perfect. 😏<br />I’m a big believer in <em>doing prod things</em>, even when it’s just a demo. Because that’s how you become dangerous (in a good way).</p>
<hr />
<h2 id="heading-lets-do-some-prod-stuff-database-backup-every-day">💾 Let’s Do Some “Prod Stuff” — Database Backup Every Day</h2>
<p>Now we’re talking. Real-world example.<br />Let’s make cron handle database backups - because no one wants to be that engineer who forgot to back up.</p>
<hr />
<h3 id="heading-step-1-spin-up-mysql-in-docker">🐳 Step 1: Spin Up MySQL in Docker</h3>
<p>Run this command like a boss:</p>
<pre><code class="lang-bash">docker run -d \
  --name mysql-demo \
  -e MYSQL_ROOT_PASSWORD=rootpass \
  -e MYSQL_DATABASE=mydb \
  -p 3306:3306 \
  mysql:8
</code></pre>
<p>Quick breakdown:</p>
<ul>
<li><p><code>-d</code> → runs in background</p>
</li>
<li><p><code>--name</code> → gives your container a cool name</p>
</li>
<li><p><code>-e</code> → sets env variables (root password, db name)</p>
</li>
<li><p><code>-p</code> → maps MySQL’s port 3306 in the container to port 3306 on your machine</p>
</li>
</ul>
<p>Check if it’s alive:</p>
<pre><code class="lang-bash">docker ps
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762539659346/55d05b35-729f-4d9b-963c-c6760abed9f2.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-step-2-add-some-data">🍽️ Step 2: Add Some Data</h3>
<p>We need data to back up, right? Let’s cook some up 🍳</p>
<p>Create a table:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it mysql-demo mysql -uroot -prootpass -e \
<span class="hljs-string">"CREATE TABLE mydb.users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(50));"</span>
</code></pre>
<p>Insert fake data:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -i mysql-demo mysql -uroot -prootpass -D mydb -e <span class="hljs-string">"
INSERT INTO users (name) VALUES
('Alice'), ('Bob'), ('Charlie'), ('Diana'), ('Ethan');
"</span>
</code></pre>
<p>Verify it:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it mysql-demo mysql -uroot -prootpass -D mydb -e <span class="hljs-string">"SELECT * FROM users;"</span>
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762539737926/f817cd57-790f-428b-b000-ab8d07de138f.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-step-3-manual-backup-just-to-feel-it">🧠 Step 3: Manual Backup — Just to Feel It</h3>
<p>Create a backup folder first (<code>chmod 777</code> is fine for this demo; use tighter permissions in production):</p>
<pre><code class="lang-bash">sudo mkdir -p /backups
sudo chmod 777 /backups
</code></pre>
<p>Now dump the database:</p>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> mysql-demo mysqldump -uroot -prootpass mydb &gt; /backups/mysql_backup_$(date +%F).sql
</code></pre>
<p>Boom 💥 You just created your first SQL backup with today’s date in the filename.</p>
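<p>Two quick bonuses. First, a sanity check on the filename trick: <code>$(date +%F)</code> expands to <code>YYYY-MM-DD</code>, so every day gets its own file. Second, restoring is just feeding the dump back into <code>mysql</code> (the container and credentials below are the ones from this demo, shown as a commented sketch):</p>
<pre><code class="lang-bash"># See what filename today's backup would get
backup_file="/backups/mysql_backup_$(date +%F).sql"
echo "$backup_file"

# Restore sketch: pipe the dump back into the container
# docker exec -i mysql-demo mysql -uroot -prootpass mydb &lt; "$backup_file"
</code></pre>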
<hr />
<h3 id="heading-step-4-automate-like-a-boss">🕑 Step 4: Automate Like a Boss</h3>
<p>Alright, let’s make cron work for us again - this time, every night at <strong>2 AM</strong>.<br />Because while we sleep, cron grinds. 😴</p>
<p>Open crontab again:</p>
<pre><code class="lang-bash">crontab -e
</code></pre>
<p>Add this line:</p>
<pre><code class="lang-bash">0 2 * * * docker <span class="hljs-built_in">exec</span> mysql-demo mysqldump -uroot -prootpass mydb &gt; /backups/mysql_backup_$(date +\%F).sql
</code></pre>
<p>💡 Notice the <code>\%F</code> — the <code>%</code> character is special in a crontab (cron treats an unescaped <code>%</code> as a newline), so the backslash tells cron, “Yo, don’t get confused, that’s just my date.”</p>
<p>If you’re too impatient to wait till 2 AM (like me waiting for a girl’s reply 💔), change it temporarily to:</p>
<pre><code class="lang-bash">*/2 * * * * docker <span class="hljs-built_in">exec</span> mysql-demo mysqldump -uroot -prootpass mydb &gt; /backups/mysql_backup_$(date +\%F).sql
</code></pre>
<p>That’ll run every 2 minutes. Instant results, instant dopamine 😏</p>
<hr />
<h3 id="heading-step-5-clean-up-delete-old-backups">🧹 Step 5: Clean Up — Delete Old Backups</h3>
<p>Because no one wants a folder heavier than their ex’s emotional baggage 😬</p>
<p>Add this line to remove backups older than 7 days (<code>-mtime +7</code> matches files last modified more than 7 days ago):</p>
<pre><code class="lang-bash">0 3 * * * find /backups -name <span class="hljs-string">"mysql_backup_*.sql"</span> -mtime +7 -delete
</code></pre>
<p>Boom. Fresh backups only. Every day. Clean and classy.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1762539977894/af2e17fe-897c-4e8a-89b9-0afcb4108a56.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-example-folder">🧩 Example Folder</h3>
<pre><code class="lang-bash">/backups/
 ├── mysql_backup_2025-11-05.sql
 ├── mysql_backup_2025-11-06.sql
 └── mysql_backup_2025-11-07.sql
</code></pre>
<hr />
<h3 id="heading-bonus-trick-compressed-backups">💣 Bonus Trick — Compressed Backups</h3>
<p>Want to save space and look cool doing it? Use gzip:</p>
<pre><code class="lang-bash">0 2 * * * docker <span class="hljs-built_in">exec</span> mysql-demo mysqldump -uroot -prootpass mydb | gzip &gt; /backups/mysql_backup_$(date +\%F).sql.gz
</code></pre>
<p>Now your backup’s smaller and faster like a code snippet that just got optimized.</p>
<p>Wait, Basir, this is too raw, man! Where’s my favorite backend language, my tool, my code? Ohh baby, hold on, I’ve got to bring everyone along on this ride. I don’t fall for syntactical sugar; I love those raw commands that make you feel like a real dev at 2 AM 😎. And guess what? Most of your favorite backend frameworks secretly do the same thing under the hood. You can use cron in any language you love; trust me, you’re just <strong>one search away</strong> from making it happen!</p>
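<p>And because I never leave you hanging, here’s what that “one search away” looks like conceptually. This is a toy, dependency-free Node.js sketch of the matching idea (real projects would reach for a library like <code>node-cron</code>, which speaks the same five-field syntax; this toy only handles <code>*</code> or a plain number, and only the minute and hour fields):</p>
<pre><code class="lang-javascript">// Toy cron matcher: does "minute hour * * *" match a given Date?
function matches(field, value) {
  if (field === "*") { return true; }
  return Number(field) === value;
}

function shouldRun(expr, now) {
  const parts = expr.split(/\s+/); // [minute, hour, dayOfMonth, month, dayOfWeek]
  if (!matches(parts[0], now.getMinutes())) { return false; }
  return matches(parts[1], now.getHours());
}

// "30 13 * * *" is the 1:30 PM example from earlier
const at130pm = new Date(2025, 0, 1, 13, 30);
console.log(shouldRun("30 13 * * *", at130pm)); // true
console.log(shouldRun("0 2 * * *", at130pm));   // false
</code></pre>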
<hr />
<h3 id="heading-final-thought">⚡ Final Thought</h3>
<p>Cron is that quiet friend who never asks for credit but makes sure everything runs on time.<br />You sleep. Cron works. You forget. Cron remembers.</p>
<p>And that’s why I love it. It’s not just automation - it’s discipline in code form.<br />So go ahead - set it, forget it, and let cron handle your boring stuff while you build something epic.</p>
<p>Because remember… <strong>Basir doesn’t wait for miracles - he schedules them.</strong> 💥</p>
]]></content:encoded></item><item><title><![CDATA[Welcome to the Magic: Real-Time Updates with Server-Sent Events (SSE) in Node.js + React.js]]></title><description><![CDATA[In today’s web world, real-time communication is everywhere — whether you’re building dashboards, notifications, live metrics, or chat systems. When you hear “real-time,” most people think of WebSockets.
But there’s another lightweight, reliable alter...]]></description><link>https://basir.devsomeware.com/welcome-to-the-magic-real-time-updates-with-server-sent-events-sse-in-nodejs-reactjs</link><guid isPermaLink="true">https://basir.devsomeware.com/welcome-to-the-magic-real-time-updates-with-server-sent-events-sse-in-nodejs-reactjs</guid><category><![CDATA[serversentevents]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[react js]]></category><category><![CDATA[realtime]]></category><category><![CDATA[Redis]]></category><category><![CDATA[redis-cache]]></category><category><![CDATA[JavaScript]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sun, 19 Oct 2025 16:46:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/PSpf_XgOM5w/upload/c8376e04ed345383092d75e3b15ec7e0.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In today’s web world, <strong>real-time communication</strong> is everywhere — whether you’re building dashboards, notifications, live metrics, or chat systems.<br />When you hear “real-time,” most people think of <strong>WebSockets</strong>.</p>
<p>But there’s another lightweight, reliable alternative that’s often overlooked —<br />💡 <strong>Server-Sent Events (SSE)</strong>.</p>
<p>In this blog, we’ll build a <strong>real-time data stream</strong> using <strong>Node.js (Express)</strong> on the backend and <strong>React.js</strong> on the frontend.<br />By the end, your browser will receive <strong>live updates</strong> from the server — no manual refreshes, no complex setup. Just pure streaming magic. ✨</p>
<hr />
<h2 id="heading-what-are-server-sent-events">🧠 What Are Server-Sent Events?</h2>
<p>Server-Sent Events (SSE) allow the <strong>server to push data</strong> to the client <strong>over a single HTTP connection</strong>.<br />The client subscribes to this stream and receives automatic updates — all using standard HTTP and no extra libraries.</p>
<p>Unlike <strong>WebSockets</strong>, SSE is:</p>
<ul>
<li><p>✅ <strong>Unidirectional</strong> (Server ➜ Client)</p>
</li>
<li><p>✅ <strong>Simple to implement</strong></p>
</li>
<li><p>✅ <strong>HTTP/1.1 friendly</strong></p>
</li>
<li><p>✅ <strong>Perfect for dashboards, logs, notifications</strong></p>
</li>
</ul>
<hr />
<h2 id="heading-how-sse-works-simple-concept">⚙️ How SSE Works (Simple Concept)</h2>
<ol>
<li><p>The <strong>browser</strong> opens a persistent connection to <code>/events</code>.</p>
</li>
<li><p>The <strong>server</strong> keeps the connection open and sends plain-text messages. Each message is a <code>data:</code> line ending with two newlines (<code>\n\n</code>), which is what marks the end of an event:</p>
<pre><code class="lang-bash">data: Hello Client!\n\n
</code></pre>
</li>
<li><p>The <strong>browser</strong> listens using the <code>EventSource</code> API.</p>
</li>
</ol>
<p>So when new data arrives, the client instantly updates — no refresh or request needed.</p>
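<p>Quick bonus before we build it: besides <code>data:</code>, the SSE wire format also defines optional <code>event:</code> and <code>id:</code> fields. A named event lets the client subscribe with <code>eventSource.addEventListener("price", handler)</code> instead of <code>onmessage</code>, and on reconnect the browser automatically sends the last seen <code>id</code> in a <code>Last-Event-ID</code> header. A full message using all three fields looks like this (the field names come from the spec; the payload values are just an example):</p>
<pre><code class="lang-bash">id: 42
event: price
data: {"symbol":"AAPL","value":123}

</code></pre>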
<hr />
<h2 id="heading-step-1-setting-up-the-nodejs-express-server">🧩 Step 1: Setting Up the Node.js (Express) Server</h2>
<p>Let’s start by creating a new Node.js project.</p>
<pre><code class="lang-bash">mkdir sse-demo &amp;&amp; <span class="hljs-built_in">cd</span> sse-demo
npm init -y
npm install express cors
</code></pre>
<p>Now create a file named <code>server.js</code> 👇 (and add <code>"type": "module"</code> to your <code>package.json</code>, since the code below uses ES module <code>import</code> syntax)</p>
<pre><code class="lang-javascript">import express from <span class="hljs-string">"express"</span>;
import cors from <span class="hljs-string">"cors"</span>;

const app = express();
app.use(cors());
const PORT = 4000;

// List of connected clients
const clients = [];

// ---------------------------
// 1️⃣ SSE Endpoint
// ---------------------------
app.get(<span class="hljs-string">"/events"</span>, (req, res) =&gt; {
  res.setHeader(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"text/event-stream"</span>);
  res.setHeader(<span class="hljs-string">"Cache-Control"</span>, <span class="hljs-string">"no-cache"</span>);
  res.setHeader(<span class="hljs-string">"Connection"</span>, <span class="hljs-string">"keep-alive"</span>);

  res.write(<span class="hljs-string">"data: Connected to SSE stream\n\n"</span>);

  // Add client connection
  clients.push(res);
  console.log(<span class="hljs-string">"Client connected, total:"</span>, clients.length);

  req.on(<span class="hljs-string">"close"</span>, () =&gt; {
    console.log(<span class="hljs-string">"Client disconnected"</span>);
    clients.splice(clients.indexOf(res), 1);
  });
});

// ---------------------------
// 2️⃣ Simulate Sending Data Every 3s
// ---------------------------
setInterval(() =&gt; {
  const message = {
    time: new Date().toISOString(),
    random: Math.floor(Math.random() * 100),
  };
  clients.forEach((client) =&gt;
    client.write(`data: <span class="hljs-variable">${JSON.stringify(message)}</span>\n\n`)
  );
}, 3000);

app.listen(PORT, () =&gt;
  console.log(`✅ SSE Server running on http://localhost:<span class="hljs-variable">${PORT}</span>`)
);
</code></pre>
<p>Now run it:</p>
<pre><code class="lang-bash">node server.js
</code></pre>
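<p>One production tip before we move on: some proxies and load balancers silently close HTTP connections that look idle. The SSE spec allows comment lines starting with <code>:</code>, which <code>EventSource</code> ignores, so a periodic “heartbeat” comment keeps the stream alive. Here’s a minimal, dependency-free sketch (the <code>clients</code> array mirrors the one in the server above; the fake <code>write</code> object is only there so you can see the exact bytes sent):</p>
<pre><code class="lang-javascript">// Heartbeat sketch: send an SSE comment every so often to keep connections open.
const clients = [];

function heartbeat() {
  for (const res of clients) {
    res.write(":ping\n\n"); // lines starting with ":" are ignored by EventSource
  }
}

// Demo with a fake response object instead of a real Express res:
const sent = [];
clients.push({ write: function (chunk) { sent.push(chunk); } });
heartbeat();
console.log(JSON.stringify(sent));

// In the real server you would just add: setInterval(heartbeat, 25000);
</code></pre>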
<hr />
<h2 id="heading-step-2-setting-up-the-react-client">🖥️ Step 2: Setting Up the React Client</h2>
<p>If you don’t already have a React app:</p>
<pre><code class="lang-bash">npx create-react-app sse-client
<span class="hljs-built_in">cd</span> sse-client
npm start
</code></pre>
<hr />
<h3 id="heading-create-ssestreamjsx">Create <code>SSEStream.jsx</code></h3>
<pre><code class="lang-javascript">import React, { useEffect, useState } from <span class="hljs-string">"react"</span>;

const SSEStream = () =&gt; {
  const [messages, setMessages] = useState([]);

  useEffect(() =&gt; {
    const eventSource = new EventSource(<span class="hljs-string">"http://localhost:4000/events"</span>);

    eventSource.onmessage = (event) =&gt; {
      try {
        const data = JSON.parse(event.data);
        setMessages((prev) =&gt; [...prev, data]);
      } catch (e) {
        console.error(<span class="hljs-string">"Invalid data:"</span>, e);
      }
    };

    eventSource.onerror = (err) =&gt; {
      console.error(<span class="hljs-string">"SSE error:"</span>, err);
      eventSource.close();
    };

    <span class="hljs-built_in">return</span> () =&gt; {
      eventSource.close();
    };
  }, []);

  <span class="hljs-built_in">return</span> (
    &lt;div style={{ padding: 20, fontFamily: <span class="hljs-string">"monospace"</span> }}&gt;
      &lt;h2&gt;📡 Live Server-Sent Events&lt;/h2&gt;
      {messages.map((msg, i) =&gt; (
        &lt;div key={i}&gt;
          Time: {msg.time} | Random: {msg.random}
        &lt;/div&gt;
      ))}
    &lt;/div&gt;
  );
};

<span class="hljs-built_in">export</span> default SSEStream;
</code></pre>
<hr />
<h3 id="heading-add-it-in-appjs">Add It in <code>App.js</code></h3>
<pre><code class="lang-javascript">import React from <span class="hljs-string">"react"</span>;
import SSEStream from <span class="hljs-string">"./SSEStream"</span>;

<span class="hljs-keyword">function</span> <span class="hljs-function"><span class="hljs-title">App</span></span>() {
  <span class="hljs-built_in">return</span> (
    &lt;div&gt;
      &lt;h1&gt;🚀 Real-Time Data Stream (SSE + React)&lt;/h1&gt;
      &lt;SSEStream /&gt;
    &lt;/div&gt;
  );
}

<span class="hljs-built_in">export</span> default App;
</code></pre>
<hr />
<h2 id="heading-step-3-make-it-more-real-integrating-redis">🧠 Step 3: Make It More Real — Integrating Redis</h2>
<p>Now imagine your data is stored in Redis.<br />You want to push updates to clients <strong>whenever Redis changes</strong> — without polling.</p>
<p>That’s where <strong>Redis Pub/Sub</strong> shines ✨</p>
<h3 id="heading-install-redis-package">Install Redis Package</h3>
<pre><code class="lang-bash">npm install redis
</code></pre>
<h3 id="heading-updated-server-serverjs">Updated Server (<code>server.js</code>)</h3>
<pre><code class="lang-javascript">import express from <span class="hljs-string">"express"</span>;
import cors from <span class="hljs-string">"cors"</span>;
import { createClient } from <span class="hljs-string">"redis"</span>;

const app = express();
app.use(cors());
const PORT = 4000;

// Redis setup
const redisClient = createClient();
const subscriber = redisClient.duplicate();
await redisClient.connect();
await subscriber.connect();

const clients = [];

app.get(<span class="hljs-string">"/events"</span>, (req, res) =&gt; {
  res.setHeader(<span class="hljs-string">"Content-Type"</span>, <span class="hljs-string">"text/event-stream"</span>);
  res.setHeader(<span class="hljs-string">"Cache-Control"</span>, <span class="hljs-string">"no-cache"</span>);
  res.setHeader(<span class="hljs-string">"Connection"</span>, <span class="hljs-string">"keep-alive"</span>);
  clients.push(res);
  req.on(<span class="hljs-string">"close"</span>, () =&gt; clients.splice(clients.indexOf(res), 1));
});

await subscriber.subscribe(<span class="hljs-string">"updates"</span>, (message) =&gt; {
  console.log(<span class="hljs-string">"Redis message:"</span>, message);
  clients.forEach((res) =&gt; res.write(`data: <span class="hljs-variable">${message}</span>\n\n`));
});

app.listen(PORT, () =&gt; console.log(`✅ SSE Server running on <span class="hljs-variable">${PORT}</span>`));
</code></pre>
<h3 id="heading-publisher-example">Publisher Example</h3>
<pre><code class="lang-javascript">// publisher.js
import { createClient } from <span class="hljs-string">"redis"</span>;
const publisher = createClient();
await publisher.connect();

setInterval(async () =&gt; {
  const data = JSON.stringify({
    time: new Date().toISOString(),
    value: Math.floor(Math.random() * 1000),
  });
  await publisher.publish(<span class="hljs-string">"updates"</span>, data);
  console.log(<span class="hljs-string">"Published:"</span>, data);
}, 3000);
</code></pre>
<p>Now every time your Redis publisher emits a message, <strong>React instantly sees it</strong> 🔥</p>
<hr />
<h2 id="heading-when-to-use-sse">🎯 When to Use SSE</h2>
<p>✅ Perfect for:</p>
<ul>
<li><p>Live analytics dashboards</p>
</li>
<li><p>Real-time logs or monitoring</p>
</li>
<li><p>Notification systems</p>
</li>
<li><p>Stock prices / crypto price feeds</p>
</li>
<li><p>System health monitoring</p>
</li>
</ul>
<p>❌ Not ideal for:</p>
<ul>
<li><p>Two-way chat (use WebSockets)</p>
</li>
<li><p>High-frequency two-way data exchange (SSE is one-way, and browsers cap concurrent HTTP/1.1 connections per domain)</p>
</li>
</ul>
<hr />
<h2 id="heading-final-thoughts">💬 Final Thoughts</h2>
<p>Server-Sent Events (SSE) are an elegant, HTTP-friendly way to stream updates <strong>from your server to browsers</strong>.<br />They’re simpler than WebSockets, and with Redis Pub/Sub, they scale beautifully in production.</p>
<p>In this tutorial, we:</p>
<ul>
<li><p>Built an <strong>Express server</strong> streaming live data</p>
</li>
<li><p>Connected a <strong>React frontend</strong> using <code>EventSource</code></p>
</li>
<li><p>Enhanced it with <strong>Redis Pub/Sub</strong> for distributed real-time updates</p>
</li>
</ul>
<p>Now you can build real-time dashboards, live notifications, or any app that thrives on <strong>continuous data flow</strong> ⚡</p>
<hr />
<h2 id="heading-bonus-repo-structure-example">🔗 Bonus: Repo Structure Example</h2>
<pre><code class="lang-bash">sse-demo/
│
├── server.js           <span class="hljs-comment"># Express + Redis + SSE server</span>
├── publisher.js        <span class="hljs-comment"># Redis publisher</span>
└── sse-client/         <span class="hljs-comment"># React frontend</span>
    ├── src/
    │   ├── App.js
    │   └── SSEStream.jsx
</code></pre>
]]></content:encoded></item><item><title><![CDATA[Setting up CI/CD Through GitHub Actions]]></title><description><![CDATA[Step 1 — Generate a CI/CD-friendly SSH key on your EC2
Open your EC2 terminal and run:
# Generate a new SSH key specifically for GitHub Actions
ssh-keygen -t rsa -b 4096 -m PEM -C "github-actions" -f ~/github_ec2_ci_key -N ""

Explanation:

-t rsa → ...]]></description><link>https://basir.devsomeware.com/setting-up-cicd-through-github-actions</link><guid isPermaLink="true">https://basir.devsomeware.com/setting-up-cicd-through-github-actions</guid><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sat, 18 Oct 2025 12:44:57 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762587435744/01125b42-26d5-4d91-b389-b303dd325710.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-step-1-generate-a-cicd-friendly-ssh-key-on-your-ec2"><strong>Step 1 — Generate a CI/CD-friendly SSH key on your EC2</strong></h1>
<p>Open your EC2 terminal and run:</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Generate a new SSH key specifically for GitHub Actions</span>
ssh-keygen -t rsa -b 4096 -m PEM -C <span class="hljs-string">"github-actions"</span> -f ~/github_ec2_ci_key -N <span class="hljs-string">""</span>
</code></pre>
<p>Explanation:</p>
<ul>
<li><p><code>-t rsa</code> → RSA key type</p>
</li>
<li><p><code>-b 4096</code> → 4096-bit for strong encryption</p>
</li>
<li><p><code>-m PEM</code> → ensures compatibility (some tools like GitHub Actions need PEM format)</p>
</li>
<li><p><code>-C "github-actions"</code> → a comment for identification</p>
</li>
<li><p><code>-f ~/github_ec2_ci_key</code> → saves key in your home directory</p>
</li>
<li><p><code>-N ""</code> → <strong>no passphrase</strong> (required for CI/CD)</p>
</li>
</ul>
<p>This will create <strong>two files</strong>:</p>
<pre><code class="lang-bash">~/github_ec2_ci_key      → private key
~/github_ec2_ci_key.pub  → public key
</code></pre>
<hr />
<h1 id="heading-step-2-add-the-public-key-to-authorizedkeys"><strong>Step 2 — Add the public key to authorized_keys</strong></h1>
<pre><code class="lang-bash"><span class="hljs-comment"># Ensure the .ssh directory exists</span>
mkdir -p ~/.ssh

<span class="hljs-comment"># Append the public key to authorized_keys</span>
cat ~/github_ec2_ci_key.pub &gt;&gt; ~/.ssh/authorized_keys

<span class="hljs-comment"># Set proper permissions</span>
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
</code></pre>
<p>✅ Now your EC2 <strong>trusts this key</strong> for SSH login.</p>
<hr />
<h1 id="heading-step-3-test-ssh-locally"><strong>Step 3 — Test SSH locally</strong></h1>
<pre><code class="lang-bash">ssh -i ~/github_ec2_ci_key ubuntu@&lt;EC2_PUBLIC_IP&gt;
</code></pre>
<ul>
<li><p>You should log in <strong>without typing a passphrase</strong>.</p>
</li>
<li><p>If it works, your key setup is correct.</p>
</li>
</ul>
<hr />
<h1 id="heading-step-4-copy-private-key-to-github-secrets"><strong>Step 4 — Copy private key to GitHub Secrets</strong></h1>
<ol>
<li>Run:</li>
</ol>
<pre><code class="lang-bash">cat ~/github_ec2_ci_key
</code></pre>
<ol start="2">
<li>Copy everything starting from:</li>
</ol>
<pre><code class="lang-bash">-----BEGIN RSA PRIVATE KEY-----
</code></pre>
<p>…to:</p>
<pre><code class="lang-bash">-----END RSA PRIVATE KEY-----
</code></pre>
<ol start="3">
<li><p>Go to GitHub → <strong>Settings → Secrets → Actions → New repository secret</strong></p>
</li>
<li><p>Name it: <code>EC2_KEY</code></p>
</li>
<li><p>Paste the private key there</p>
</li>
</ol>
<blockquote>
<p><strong>No passphrase needed</strong>, so you don’t need a <code>passphrase:</code> field in GitHub Actions.</p>
</blockquote>
<hr />
<h1 id="heading-step-5-create-github-actions-workflow"><strong>Step 5 — Create GitHub Actions workflow</strong></h1>
<p>Create <code>.github/workflows/deploy.yml</code> in your repository:</p>
<pre><code class="lang-yaml">name: Deploy to EC2

on:
  push:
    branches:
      - main

<span class="hljs-built_in">jobs</span>:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Deploy to EC2
        uses: appleboy/ssh-action@v0.1.9
        with:
          host: <span class="hljs-variable">${{ secrets.EC2_HOST }</span>}        <span class="hljs-comment"># Your EC2 public IP</span>
          username: <span class="hljs-variable">${{ secrets.EC2_USER }</span>}    <span class="hljs-comment"># Typically 'ubuntu'</span>
          key: <span class="hljs-variable">${{ secrets.EC2_KEY }</span>}          <span class="hljs-comment"># Private key secret</span>
          port: 22
          script: |
            <span class="hljs-built_in">cd</span> /var/www/artistic_backend
            git pull origin main
            npm install
            npm run build
            pm2 restart all
</code></pre>
<h2 id="heading-or">OR</h2>
<pre><code class="lang-yaml">name: Deploy to EC2

on:
  push:
    branches:
      - master

<span class="hljs-built_in">jobs</span>:
  deploy:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Deploy to EC2
        uses: appleboy/ssh-action@v0.1.9
        with:
          host: <span class="hljs-variable">${{ secrets.EC2_HOST }</span>}
          username: <span class="hljs-variable">${{ secrets.EC2_USER }</span>}
          key: <span class="hljs-variable">${{ secrets.EC2_KEY }</span>}
          port: 22
          script: |
            <span class="hljs-comment"># Load Node.js environment (handles nvm or global install)</span>
            <span class="hljs-keyword">if</span> [ -f ~/.nvm/nvm.sh ]; <span class="hljs-keyword">then</span>
              <span class="hljs-built_in">echo</span> <span class="hljs-string">"Using nvm environment"</span>
              <span class="hljs-built_in">source</span> ~/.nvm/nvm.sh
              nvm use node
            <span class="hljs-keyword">else</span>
              <span class="hljs-built_in">echo</span> <span class="hljs-string">"Using system-wide Node.js"</span>
              <span class="hljs-built_in">export</span> PATH=<span class="hljs-variable">$PATH</span>:/usr/<span class="hljs-built_in">local</span>/bin:/usr/bin
            <span class="hljs-keyword">fi</span>

            <span class="hljs-built_in">cd</span> /home/ubuntu/quick-test/backend
            <span class="hljs-built_in">echo</span> <span class="hljs-string">"Pulling latest code..."</span>
            git pull origin master

            <span class="hljs-built_in">echo</span> <span class="hljs-string">"Installing dependencies..."</span>
            npm install

            <span class="hljs-built_in">echo</span> <span class="hljs-string">"Building TypeScript..."</span>
            npx tsc -b

            <span class="hljs-built_in">echo</span> <span class="hljs-string">"Restarting PM2 process..."</span>
            pm2 restart all || pm2 start ecosystem.config.js --update-env
</code></pre>
<hr />
<h1 id="heading-step-6-secrets-you-need-in-github"><strong>Step 6 — Secrets you need in GitHub</strong></h1>
<ul>
<li><p><code>EC2_HOST</code> → your EC2 public IP</p>
</li>
<li><p><code>EC2_USER</code> → usually <code>ubuntu</code></p>
</li>
<li><p><code>EC2_KEY</code> → the private key you copied</p>
</li>
</ul>
<p>No passphrase is required because the key is unencrypted.</p>
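<p>Optional hardening: since this key exists only for deployments, you can restrict what it may do by prefixing its line in <code>~/.ssh/authorized_keys</code> with standard OpenSSH key options (the key body below is truncated for illustration; the options shown disable forwarding features the workflow never needs):</p>
<pre><code class="lang-bash">no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-rsa AAAA...rest-of-key github-actions
</code></pre>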
<hr />
<h1 id="heading-optional-remove-passphrase-from-an-existing-key">✅ <strong>Optional: Remove passphrase from an existing key</strong></h1>
<p>If you already generated a key <strong>with a passphrase</strong>, you can remove it:</p>
<pre><code class="lang-bash">ssh-keygen -p -f ~/existing_key
<span class="hljs-comment"># Enter current passphrase</span>
<span class="hljs-comment"># For new passphrase: press ENTER</span>
<span class="hljs-comment"># Confirm: press ENTER</span>
</code></pre>
<hr />
<h1 id="heading-step-7-verify-workflow"><strong>Step 7 — Verify workflow</strong></h1>
<ol>
<li><p>Push to the <code>main</code> branch.</p>
</li>
<li><p>GitHub Actions will:</p>
</li>
</ol>
<ul>
<li><p>Checkout the code</p>
</li>
<li><p>SSH into your EC2</p>
</li>
<li><p>Pull latest code, install dependencies, build, and restart PM2</p>
</li>
</ul>
<p>No more <code>ssh: handshake failed</code> errors because:</p>
<ul>
<li><p>The key is trusted by EC2</p>
</li>
<li><p>The private key is unencrypted</p>
</li>
<li><p>GitHub Actions can use it in Docker without needing a passphrase</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Valkey Installation on ubuntu]]></title><description><![CDATA[Install Docker
for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo inst...]]></description><link>https://basir.devsomeware.com/valkey-installation-on-ubuntu</link><guid isPermaLink="true">https://basir.devsomeware.com/valkey-installation-on-ubuntu</guid><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Fri, 17 Oct 2025 20:01:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1762587548853/32038a48-5688-4471-ae8c-bcb1806b231b.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-install-docker">Install Docker</h2>
<pre><code class="lang-bash"><span class="hljs-keyword">for</span> pkg <span class="hljs-keyword">in</span> docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; <span class="hljs-keyword">do</span> sudo apt-get remove <span class="hljs-variable">$pkg</span>; <span class="hljs-keyword">done</span>
</code></pre>
<pre><code class="lang-bash"><span class="hljs-comment"># Add Docker's official GPG key:</span>
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

<span class="hljs-comment"># Add the repository to Apt sources:</span>
<span class="hljs-built_in">echo</span> \
  <span class="hljs-string">"deb [arch=<span class="hljs-subst">$(dpkg --print-architecture)</span> signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  <span class="hljs-subst">$(. /etc/os-release &amp;&amp; echo <span class="hljs-string">"<span class="hljs-variable">${UBUNTU_CODENAME:-<span class="hljs-variable">$VERSION_CODENAME</span>}</span>"</span>)</span> stable"</span> | \
  sudo tee /etc/apt/sources.list.d/docker.list &gt; /dev/null
sudo apt-get update
</code></pre>
<pre><code class="lang-bash">sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
</code></pre>
<p>Check docker installation status</p>
<pre><code class="lang-bash">sudo systemctl status docker
</code></pre>
<h2 id="heading-set-the-conf">Set the conf</h2>
<pre><code class="lang-bash">sudo mkdir -p /data/valkey
sudo rm -f /data/valkey/valkey.conf
sudo nano /data/valkey/valkey.conf
</code></pre>
<h3 id="heading-paste-the-conf-make-sure-to-edit-the-password">Paste the following config (make sure to change the password)</h3>
<pre><code class="lang-bash"><span class="hljs-built_in">bind</span> 0.0.0.0
port 6379
protected-mode yes

requirepass StrongP@ssw0rd!

save 60 1
appendonly yes
appendfilename <span class="hljs-string">"appendonly.aof"</span>
appendfsync everysec

dir /data

logfile /data/valkey.log
loglevel notice

maxmemory 1gb
maxmemory-policy allkeys-lru
</code></pre>
<p>Check that the config file was created:</p>
<pre><code class="lang-bash">ls -l /data/valkey/valkey.conf
</code></pre>
<pre><code class="lang-bash">-rw-r--r-- 1 ubuntu ubuntu 350 Oct 18 16:10 /data/valkey/valkey.conf
</code></pre>
<h2 id="heading-run-the-container">Run the container</h2>
<pre><code class="lang-bash">sudo docker run -d \
  --name valkey \
  -p 6379:6379 \
  -v /data/valkey:/data \
  -v /data/valkey/valkey.conf:/etc/valkey/valkey.conf:ro \
  --restart unless-stopped \
  valkey/valkey:latest \
  valkey-server /etc/valkey/valkey.conf
</code></pre>
<p>Check that everything started cleanly:</p>
<pre><code class="lang-bash">sudo tail -f /data/valkey/valkey.log
</code></pre>
<p>You should see the message <code>Ready to accept connections</code> in the log.</p>
<h2 id="heading-conf-this-if-you-think-that-kernel-should-allocate-as-much-they-can">Configure this if you want the kernel to allocate memory optimistically</h2>
<p><strong>1️⃣</strong> <code>maxmemory = 1 GB</code></p>
<ul>
<li><p>This is a <strong>hard cap</strong> inside ValKey.</p>
</li>
<li><p>Redis <strong>will never store more than 1 GB of data in memory</strong>, even if your machine has 4 GB free.</p>
</li>
<li><p>Eviction (<code>allkeys-lru</code>) will kick in once that limit is reached.</p>
</li>
<li><p><strong>Effect:</strong> Redis memory usage is limited to 1 GB for keys/data.</p>
</li>
</ul>
<hr />
<h3 id="heading-2-vmovercommitmemory1"><strong>2️⃣</strong> <code>vm.overcommit_memory=1</code></h3>
<ul>
<li><p>This is <strong>a Linux-level setting</strong>, not a ValKey setting.</p>
</li>
<li><p>Linux allows Redis to request memory <strong>without being blocked</strong>, even if the kernel thinks the system might run out of RAM.</p>
</li>
<li><p><strong>Important:</strong> This does <strong>not change the</strong> <code>maxmemory</code> limit inside Redis.</p>
</li>
</ul>
<p>So in your case:</p>
<ul>
<li><p>ValKey will <strong>still only store 1 GB</strong> of data (because of <code>maxmemory=1gb</code>).</p>
</li>
<li><p>Linux may allocate slightly more than 1 GB to Redis internally for <strong>overhead, buffers, bookkeeping</strong>, without killing the process.</p>
</li>
<li><p>Redis <strong>cannot “go beyond 1 GB for data”</strong> just because <code>overcommit=1</code> is set.  </p>
</li>
</ul>
<pre><code class="lang-bash"><span class="hljs-comment"># Temporary (until next reboot)</span>
sudo sysctl vm.overcommit_memory=1

<span class="hljs-comment"># Permanent</span>
<span class="hljs-built_in">echo</span> <span class="hljs-string">"vm.overcommit_memory=1"</span> | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
</code></pre>
<h2 id="heading-here-is-your-password">Here is your connection string</h2>
<p>Make sure to allow port 6379 as an inbound rule in your security group.</p>
<pre><code class="lang-bash">redis://default:&lt;pass&gt;@&lt;EC2_PRIVATE_OR_PUBLIC_IP&gt;:6379
</code></pre>
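<p>One caveat about this URL format: the example password contains an <code>@</code>, which is also the delimiter before the host. To keep the connection string unambiguous, percent-encode any special characters in the password. A quick sketch using Node's built-in <code>encodeURIComponent</code> (the password is the example one from this guide; the host below is a placeholder):</p>
<pre><code class="lang-javascript">// Percent-encode the password before placing it in a redis:// URL
const password = 'StrongP@ssw0rd!';
const encoded = encodeURIComponent(password); // "@" becomes "%40"
console.log(encoded); // StrongP%40ssw0rd!

// 127.0.0.1 is a placeholder; substitute your EC2 IP
const url = `redis://default:${encoded}@127.0.0.1:6379`;
console.log(url);
</code></pre>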
<h2 id="heading-test-command-in-cli">Test command in cli</h2>
<pre><code class="lang-bash">docker <span class="hljs-built_in">exec</span> -it valkey valkey-cli
&gt; auth StrongP@ssw0rd!
OK
&gt; <span class="hljs-built_in">set</span> <span class="hljs-built_in">test</span> <span class="hljs-string">"persistent"</span>
OK
&gt; get <span class="hljs-built_in">test</span>
<span class="hljs-string">"persistent"</span>
</code></pre>
<h2 id="heading-node-js-client-sample-code">Node.js Client Sample Code</h2>
<pre><code class="lang-javascript">// index.js
import { createClient } from 'redis';

async function main() {
  // Replace &lt;EC2_IP&gt; with your EC2 public/private IP
  const client = createClient({
    url: 'redis://default:StrongP@ssw0rd!@&lt;EC2_IP&gt;:6379'
  });

  client.on('error', (err) =&gt; console.error('Redis Client Error', err));

  try {
    await client.connect();
    console.log('✅ Connected to ValKey successfully!');

    // Set a key with expiry of 60 seconds
    await client.set('test-key', 'Hello ValKey!', { EX: 60 });
    console.log('✅ Key "test-key" set with 60 seconds expiry');

    // Get the key
    const value = await client.get('test-key');
    console.log('🔹 Retrieved value:', value);

  } catch (err) {
    console.error('Connection failed:', err);
  } finally {
    await client.quit();
    console.log('Connection closed');
  }
}

main();
</code></pre>
<pre><code class="lang-bash">npm install redis
node index.js
</code></pre>
<pre><code class="lang-bash">✅ Connected to ValKey successfully!
✅ Key <span class="hljs-string">"test-key"</span> <span class="hljs-built_in">set</span> with 60 seconds expiry
🔹 Retrieved value: Hello ValKey!
Connection closed
</code></pre>
<h3 id="heading-bottom-line">🔹 Bottom line</h3>
<p>Your current setup is <strong>okay for small production workloads</strong> or a single-app environment:</p>
<ul>
<li><p>One EC2 instance</p>
</li>
<li><p>Domain-based access</p>
</li>
<li><p>Persistent storage via EBS</p>
</li>
<li><p>Password protection</p>
</li>
</ul>
<p>…but for <strong>enterprise-level production</strong> with high availability, failover, and monitoring, you’ll need to implement:</p>
<ul>
<li><p>Clustered ValKey / Redis replication</p>
</li>
<li><p>TLS/SSL connections</p>
</li>
<li><p>Monitoring and alerting</p>
</li>
<li><p>Automated backups</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[My Own Dns Server]]></title><description><![CDATA[How to use ?
For calculation:-
dig @dns.deploylite.tech MX calculate.2+2
#feel free to change 2+2 to something else 
dig @dns.deploylite.tech MX calculate.2+2 +short

For generating random number:-
dig @dns.deploylite.tech MX generate-random.rand
#us...]]></description><link>https://basir.devsomeware.com/my-own-dns-server</link><guid isPermaLink="true">https://basir.devsomeware.com/my-own-dns-server</guid><category><![CDATA[dns]]></category><category><![CDATA[Devops]]></category><category><![CDATA[AWS]]></category><category><![CDATA[ec2]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sun, 05 Jan 2025 11:12:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1736075335457/f28e324b-4965-4dcb-91fe-22914e3bf40e.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-how-to-use">How to use ?</h2>
<p>For calculation:-</p>
<pre><code class="lang-bash">dig @dns.deploylite.tech MX calculate.2+2
<span class="hljs-comment">#feel free to change 2+2 to something else </span>
dig @dns.deploylite.tech MX calculate.2+2 +short
</code></pre>
<p>For generating random number:-</p>
<pre><code class="lang-bash">dig @dns.deploylite.tech MX generate-random.rand
<span class="hljs-comment">#using +short</span>
dig @dns.deploylite.tech MX generate-random.rand +short
</code></pre>
<p>For getting any timezone:-  </p>
<pre><code class="lang-bash">dig @dns.deploylite.tech MX timezone.ASIA/KOLKATA
<span class="hljs-comment">#using +short</span>
dig @dns.deploylite.tech MX timezone.ASIA/KOLKATA +short
dig @dns.deploylite.tech MX timezone.US/Pacific +short
dig @dns.deploylite.tech MX timezone.Singapore +short
<span class="hljs-comment">#you can use any time zone for the query</span>
<span class="hljs-comment">#for supported timezones visit: https://timeapi.io/api/timezone/availabletimezones</span>
</code></pre>
<p>For ai response . Ask anything to ai:-  </p>
<pre><code class="lang-bash">dig @dns.deploylite.tech MX ai.what.is.dns +short
<span class="hljs-comment">#Modify this query. If your prompt contains spaces, replace each space with a dot (.)</span>
dig @dns.deploylite.tech MX ai.what.is.js +short
</code></pre>
<p>For Piyush Sir's Playlist recommendation: Ask your query about the resources you need for a tutorial, and you'll get a full-fledged playlist curated by Piyush Sir:-</p>
<pre><code class="lang-bash">dig @dns.deploylite.tech MX tutorial.nextjs +short
dig @dns.deploylite.tech MX tutorial.advanced.js +short
<span class="hljs-comment">#feel free to ask what do you want.</span>
dig @dns.deploylite.tech MX tutorial.appwrite +short
<span class="hljs-comment">#gives a not-found message if it's not there</span>
</code></pre>
<p>For Piyush Sir's Course recommendation: Ask your query:-</p>
<pre><code class="lang-bash">dig @dns.deploylite.tech MX course.docker +short
dig @dns.deploylite.tech MX course.nextjs +short
<span class="hljs-comment">#feel free to ask of your choice</span>
dig @dns.deploylite.tech MX course.web.dev.cohort +short
</code></pre>
<hr />
<p>Currently, these features are available, with more exciting features coming soon.</p>
<p>Here is a step-by-step guide to deploying your own DNS server.</p>
<h2 id="heading-how-to-deploy">How to Deploy ?</h2>
<ol>
<li>Start an EC2 instance and run the commands below to continue</li>
</ol>
<p>Allow port 53 in the security group: custom UDP → 53 → allow access from anywhere</p>
<pre><code class="lang-bash">sudo apt update &amp;&amp; sudo apt upgrade -y
</code></pre>
<p>Install nodejs</p>
<pre><code class="lang-bash">curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
<span class="hljs-built_in">source</span> ~/.bashrc
nvm install v21.7.0
</code></pre>
<p>Clone Your repo</p>
<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> repo_url
<span class="hljs-built_in">cd</span> repo_folder
npm i
</code></pre>
<p>Basic configuration for opening port 53. On Ubuntu, systemd-resolved is already using port 53, so let's disable it and free up the port:</p>
<ol>
<li>First, stop and disable systemd-resolved:</li>
</ol>
<pre><code class="lang-bash">sudo systemctl stop systemd-resolved
sudo systemctl <span class="hljs-built_in">disable</span> systemd-resolved
</code></pre>
<p>You might need to update your DNS resolver settings. Edit <code>/etc/resolv.conf</code>:</p>
<pre><code class="lang-bash">sudo nano /etc/resolv.conf
</code></pre>
<p>Replace its contents with:</p>
<pre><code class="lang-bash">nameserver 8.8.8.8  <span class="hljs-comment"># Google DNS</span>
nameserver 8.8.4.4  <span class="hljs-comment"># Google DNS backup</span>
</code></pre>
<p>To prevent systemd-resolved from starting again on reboot:</p>
<pre><code class="lang-bash">sudo systemctl mask systemd-resolved
</code></pre>
<p>If you get permission errors, make sure you've set the capabilities correctly:</p>
<pre><code class="lang-bash">sudo <span class="hljs-built_in">setcap</span> cap_net_bind_service=+ep $(<span class="hljs-built_in">which</span> node)
</code></pre>
<p>Start the DNS server in the background. Install PM2:</p>
<pre><code class="lang-bash">npm i -g pm2
</code></pre>
<pre><code class="lang-bash">pm2 start <span class="hljs-string">"node index.js"</span> --name dns
</code></pre>
<pre><code class="lang-bash">pm2 save
</code></pre>
<p><strong>Congratulations Your Server is up and running.</strong></p>
<hr />
]]></content:encoded></item><item><title><![CDATA[Create ,Attach and mount EBS volume on ec2 instance]]></title><description><![CDATA[Start ec2 instatance or launch a fresh one.
 


    2. Run lsblk you can see the disk is attached


 Create Volumes for this use ebs . for this tutorial i am creatimg 3 volumes
 
 Attach the volume


 swith to roo

run lvm command

to see pjysical st...]]></description><link>https://basir.devsomeware.com/create-attach-and-mount-ebs-volume-on-ec2-instance</link><guid isPermaLink="true">https://basir.devsomeware.com/create-attach-and-mount-ebs-volume-on-ec2-instance</guid><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sun, 05 Jan 2025 10:13:47 GMT</pubDate><content:encoded><![CDATA[<ol>
<li><p>Start an EC2 instance or launch a fresh one.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733293023356/17ce8bf1-f5da-4753-8bc8-5bf63f344c11.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Run <code>lsblk</code>; you can see the disk that is attached.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733293280674/08cd4ee7-f846-4d61-a21c-22d6014a1feb.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Create volumes. For this, use EBS; in this tutorial I am creating 3 volumes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733293410086/fa01ed6a-b41a-4b34-b12a-ec604fdd4d97.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Attach the volumes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1733293668622/7e8617ed-8040-42b7-b867-b27cfa3be779.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Switch to root.</p>
</li>
<li><p>Run the <code>lvm</code> command (install the <code>lvm2</code> package first if it is not present).</p>
</li>
<li><p>To see physical storage, run <code>pvs</code>.</p>
</li>
<li><p>Create physical volumes by running: <code>pvcreate /dev/xvdf /dev/xvdg</code></p>
</li>
<li><p>Run <code>pvs</code> again to show the physical volumes.</p>
</li>
<li><p>Create a volume group by running: <code>vgcreate basir_vol_grp /dev/xvdf /dev/xvdg</code></p>
</li>
<li><p>Run <code>vgs</code> to show the volume group.</p>
</li>
<li><p>Create a logical volume on top of it by running: <code>lvcreate -L 10G -n basir_vol basir_vol_grp</code></p>
</li>
<li><p>Run <code>pvdisplay</code> to get all physical volume information.</p>
</li>
<li><p>Run <code>lsblk</code> to confirm the logical volume is there.</p>
</li>
<li><p>Now mount the volume. If you run <code>df -h</code>, you can see it is not mounted yet.</p>
</li>
<li><p>Create a mount folder by running: <code>mkdir /mnt/basirfolder</code></p>
</li>
<li><p>Format the logical volume by running: <code>mkfs.ext4 /dev/basir_vol_grp/basir_vol</code></p>
</li>
<li><p>Mount the volume by running: <code>mount /dev/basir_vol_grp/basir_vol /mnt/basirfolder</code></p>
</li>
<li><p>Run <code>df -h</code> to confirm it is properly mounted.</p>
</li>
<li><p>To unmount, run: <code>umount /mnt/basirfolder</code></p>
</li>
<li><p>One good question: is data lost if I unmount? No — mount the volume again and the data is back. Try it at your end.</p>
</li>
<li><p>To mount a fresh EBS volume on a Linux instance: create a mount folder with <code>mkdir /mnt/basiraws</code>, format the EBS volume by running <code>mkfs -t ext4 /dev/xvdh</code>, then mount it using <code>mount /dev/xvdh /mnt/basiraws</code>.</p>
</li>
<li><p>Want to extend the logical volume? Run <code>lvextend -L +5G /dev/basir_vol_grp/basir_vol</code>, then <code>resize2fs /dev/basir_vol_grp/basir_vol</code> so the ext4 filesystem grows to match.</p>
</li>
</ol>
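<p>One thing to keep in mind: a plain <code>mount</code> does not survive a reboot. To mount the logical volume automatically at boot, add an <code>/etc/fstab</code> entry along these lines (the device and mount point are the ones from this tutorial; <code>nofail</code> lets the instance boot even if the volume is missing):</p>
<pre><code class="lang-bash">/dev/basir_vol_grp/basir_vol  /mnt/basirfolder  ext4  defaults,nofail  0  2
</code></pre>
<p>After editing fstab, running <code>sudo mount -a</code> is a quick way to confirm the entry parses and mounts cleanly.</p>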
]]></content:encoded></item><item><title><![CDATA[How to Host a React Website on AWS S3 and Speed it Up with CloudFront]]></title><description><![CDATA[In this guide, we'll explore how to host your React application on Amazon S3 and use CloudFront to speed up content delivery to users worldwide. With S3 and CloudFront, you can create a highly available, low-latency website that’s easy to set up and ...]]></description><link>https://basir.devsomeware.com/how-to-host-a-react-website-on-aws-s3-and-speed-it-up-with-cloudfront</link><guid isPermaLink="true">https://basir.devsomeware.com/how-to-host-a-react-website-on-aws-s3-and-speed-it-up-with-cloudfront</guid><category><![CDATA[AWS]]></category><category><![CDATA[S3]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[React]]></category><category><![CDATA[hosting]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sat, 02 Nov 2024 22:10:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730583396160/0d9a4679-9b49-43ac-818d-f9d806011fae.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In this guide, we'll explore how to host your React application on Amazon S3 and use CloudFront to speed up content delivery to users worldwide. With S3 and CloudFront, you can create a highly available, low-latency website that’s easy to set up and scale.</p>
<p>We’ll also follow production best practices to ensure your app runs securely, efficiently, and delivers optimal performance.</p>
<hr />
<h3 id="heading-why-choose-s3-and-cloudfront-for-hosting">Why Choose S3 and CloudFront for Hosting?</h3>
<p><strong>Amazon S3</strong> is an affordable, durable, and secure solution for static website hosting. It’s perfect for hosting files like HTML, CSS, JavaScript, images, and other assets.<br /><strong>Amazon CloudFront</strong> is a Content Delivery Network (CDN) that caches your website’s content in multiple geographic locations, reducing loading times for users around the world and minimizing load on your S3 bucket.</p>
<p>With these services together, you can host your site with:</p>
<ul>
<li><p><strong>Lower Latency</strong>: CloudFront’s edge locations around the globe speed up content delivery.</p>
</li>
<li><p><strong>Improved Performance</strong>: Caching and compression enhance performance.</p>
</li>
<li><p><strong>Better Security</strong>: Protect your S3 bucket by serving content only through CloudFront.</p>
</li>
</ul>
<hr />
<h2 id="heading-step-by-step-guide-to-host-a-react-website-on-aws-s3-and-cloudfront">Step-by-Step Guide to Host a React Website on AWS S3 and CloudFront</h2>
<h3 id="heading-step-1-build-your-react-app-for-production">Step 1: Build Your React App for Production</h3>
<ol>
<li><p><strong>Initialize and Build the React App</strong></p>
<ul>
<li><p>If you haven’t created your React app yet, start with:</p>
<pre><code class="lang-bash">  npx create-react-app my-app
</code></pre>
<p>  Replace <code>my-app</code> with your project name.</p>
</li>
<li><p>Next, navigate to your app folder:</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">cd</span> my-app
</code></pre>
</li>
<li><p>Build your app for production:</p>
<pre><code class="lang-bash">  npm run build
</code></pre>
<p>  This generates a <code>build</code> folder with optimized files ready for deployment.</p>
</li>
</ul>
</li>
<li><p><strong>Verify Your Build</strong></p>
<ul>
<li>Check the <code>build</code> folder. It should contain your production-ready HTML, CSS, JavaScript, and other assets.</li>
</ul>
</li>
</ol>
<h3 id="heading-step-11-build-your-react-app-for-production">Step 1.1: Alternatively, Clone a React Boilerplate and Build It</h3>
<ol>
<li><p><strong>Clone your react app from github</strong></p>
<ul>
<li><p>If you do not have a React app, clone this boilerplate:</p>
<pre><code class="lang-bash">  git <span class="hljs-built_in">clone</span> https://github.com/BasirKhan418/React-bolierplate-code.git
</code></pre>
</li>
<li><p>Next, navigate to your app folder:</p>
<pre><code class="lang-bash">  <span class="hljs-built_in">cd</span> React-bolierplate-code
</code></pre>
</li>
<li><p>Install Dependencies &amp; Build your app for production:</p>
<pre><code class="lang-bash">  npm i 
  npm run build
</code></pre>
<p>  This generates a <code>build</code> folder with optimized files ready for deployment.</p>
</li>
</ul>
</li>
<li><p><strong>Verify Your Build</strong></p>
<ul>
<li>Check the <code>build</code> or <code>dist</code> folder. It should contain your production-ready HTML, CSS, JavaScript, and other assets.</li>
</ul>
</li>
</ol>
<hr />
<h3 id="heading-step-2-set-up-an-s3-bucket-for-static-website-hosting">Step 2: Set Up an S3 Bucket for Static Website Hosting</h3>
<ol>
<li><p><strong>Create an S3 Bucket for Your Website</strong></p>
<ul>
<li><p>Open the <a target="_blank" href="https://s3.console.aws.amazon.com/s3/">S3 Console</a>.</p>
</li>
<li><p>Click <strong>Create Bucket</strong>.</p>
</li>
<li><p>Enter a unique bucket name and choose a region.</p>
</li>
<li><p><strong>Important</strong>: Uncheck <strong>Block all public access</strong> if you want to make the website accessible to everyone.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730584147395/39740234-5d39-45c4-a481-3324321e0416.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Upload Files to S3</strong></p>
<ul>
<li><p>In the <strong>Objects</strong> tab, click <strong>Upload</strong> and select all files from your <code>build</code>/<code>dist</code> folder.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730584467840/23f78238-8129-4676-85d9-94ba8c7c134e.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Configure Public Access</strong></p>
<ul>
<li><p>Go to the <strong>Permissions</strong> tab.</p>
</li>
<li><p>Under <strong>Bucket Policy</strong>, click <strong>Edit</strong>, paste the policy below, and make sure to replace the resource with your own S3 ARN.</p>
<pre><code class="lang-json">  {
      <span class="hljs-attr">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
      <span class="hljs-attr">"Statement"</span>: [
          {
              <span class="hljs-attr">"Sid"</span>: <span class="hljs-string">"Statement1"</span>,
              <span class="hljs-attr">"Principal"</span>: <span class="hljs-string">"*"</span>,
              <span class="hljs-attr">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
              <span class="hljs-attr">"Action"</span>: [
                  <span class="hljs-string">"s3:GetObject"</span>
              ],
              <span class="hljs-attr">"Resource"</span>: <span class="hljs-string">"your s3 arn/*"</span>
          }
      ]
  }
</code></pre>
</li>
</ul>
</li>
<li><p><strong>Enable Static Website Hosting</strong></p>
<ul>
<li><p>Go to <strong>Properties</strong> &gt; <strong>Static website hosting</strong> &gt; <strong>Edit</strong>.</p>
</li>
<li><p>Enable <strong>Static Website Hosting</strong>.</p>
</li>
<li><p>For <strong>Index Document</strong>, enter <code>index.html</code>.</p>
</li>
<li><p>For <strong>Error Document</strong>, also enter <code>index.html</code> (useful for single-page apps to handle routes).</p>
</li>
<li><p>Save your settings.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730584849745/787044a3-b326-49aa-82e8-0a976cbf3234.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730584767163/2ea25ad5-a11e-4fc0-a95d-3f0ff7b9921d.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
<li><p><strong>Test Your S3 Website</strong></p>
<ul>
<li><p>After setting up, open the <strong>Bucket website endpoint</strong> to check if your React app is accessible.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730584799998/3b5a1fc7-6fc4-466e-865b-b536d8278581.png" alt class="image--center mx-auto" /></p>
</li>
</ul>
</li>
</ol>
<hr />
<h3 id="heading-step-3-set-up-cloudfront-for-global-caching-and-lower-latency">Step 3: Set Up CloudFront for Global Caching and Lower Latency</h3>
<ol>
<li><p><strong>Create a CloudFront Distribution</strong></p>
<ul>
<li><p>Go to the <a target="_blank" href="https://console.aws.amazon.com/cloudfront/">CloudFront Console</a>.</p>
</li>
<li><p>Click <strong>Create Distribution</strong> &gt; <strong>Web</strong>.</p>
</li>
</ul>
</li>
<li><p><strong>Configure the Origin</strong></p>
<ul>
<li><p>Under <strong>Origin Domain Name</strong>, select your S3 bucket.</p>
</li>
<li><p>Set <strong>Viewer Protocol Policy</strong> to <strong>Redirect HTTP to HTTPS</strong> for security.</p>
</li>
<li><p>Configure <strong>Allowed HTTP Methods</strong> to <strong>GET, HEAD</strong> only if you don’t need POST requests.</p>
</li>
<li><p>For <strong>Origin Shield</strong>, enable <strong>Use Origin Shield</strong> to add an extra caching layer.</p>
</li>
</ul>
</li>
<li><p><strong>Set Cache Behavior Settings</strong></p>
<ul>
<li><p>In <strong>Cache Behavior Settings</strong>, enable <strong>Compress Objects Automatically</strong> to reduce file sizes.</p>
</li>
<li><p>Choose <strong>Cache Based on Selected Request Headers</strong> as <strong>None</strong> for a basic cache, or <strong>Whitelist</strong> certain headers if you have dynamic content.</p>
</li>
</ul>
</li>
<li><p><strong>Configure Custom Error Pages for Single-Page App (SPA) Routing</strong></p>
<ul>
<li>In the <strong>Error Pages</strong> section, configure a 404 error to redirect to your <code>index.html</code> and set the response code to <code>200</code>. This ensures that client-side routes are handled correctly by serving the main app shell.</li>
</ul>
</li>
<li><p><strong>Restrict S3 Bucket Access to CloudFront Only</strong></p>
<ul>
<li><p>To secure your app, configure <strong>Origin Access Control (OAC)</strong> on CloudFront so only CloudFront can access S3.</p>
</li>
<li><p>In the <strong>Permissions</strong> tab of your S3 bucket, update the bucket policy to allow CloudFront access only.</p>
</li>
</ul>
</li>
<li><p><strong>Create the Distribution</strong></p>
<ul>
<li>Once configured, click <strong>Create Distribution</strong>. It may take a few minutes to deploy.</li>
</ul>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730585082529/1348703c-8261-4b41-b032-f2ef50e9de4a.png" alt class="image--center mx-auto" /></p>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730585149261/f500ecce-4aab-470c-91ef-7febd177186c.png" alt class="image--center mx-auto" /></p>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730585301991/ad4707b0-cdd1-44b7-8aa5-051020d2542d.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-step-4-set-up-a-custom-domain-with-cloudfront-optional">Step 4: Set Up a Custom Domain with CloudFront (Optional)</h3>
<p>To add a custom domain:</p>
<ol>
<li><p><strong>Add Alternate Domain Names</strong></p>
<ul>
<li>In <strong>Alternate Domain Names (CNAMEs)</strong>, add your custom domain (e.g., <a target="_blank" href="http://www.yourdomain.com"><code>www.yourdomain.com</code></a>).</li>
</ul>
</li>
<li><p><strong>Configure SSL with ACM</strong></p>
<ul>
<li><p>Go to <a target="_blank" href="https://console.aws.amazon.com/acm/">AWS Certificate Manager (ACM)</a>, request a certificate for your domain, and validate ownership.</p>
</li>
<li><p>Associate this certificate with your CloudFront distribution.</p>
</li>
</ul>
</li>
<li><p><strong>Update DNS Records</strong></p>
<ul>
<li>Use Route 53 or your DNS provider to create a <strong>CNAME</strong> record pointing to your CloudFront distribution URL.</li>
</ul>
</li>
</ol>
<hr />
<h3 id="heading-production-best-practices-for-hosting-on-s3-and-cloudfront">Production Best Practices for Hosting on S3 and CloudFront</h3>
<p>Here are some best practices to optimize performance, security, and scalability:</p>
<ol>
<li><p><strong>Enable Compression in CloudFront</strong></p>
<ul>
<li>Enable <strong>Compress Objects Automatically</strong> in your CloudFront distribution to reduce file sizes and improve load times.</li>
</ul>
</li>
<li><p><strong>Use Cache-Control Headers</strong></p>
<ul>
<li>Set <strong>Cache-Control</strong> headers for static assets in S3 to control how long content is cached. For example, you can use <code>max-age=31536000</code> to cache files for a year. Remember to update the file name for any new versions to bypass the cache when needed.</li>
</ul>
</li>
<li><p><strong>Enable Logging</strong></p>
<ul>
<li><p>Enable CloudFront <strong>Standard Logging</strong> to monitor performance and access patterns.</p>
</li>
<li><p>Enable S3 <strong>Access Logs</strong> to review bucket access, which is helpful for troubleshooting and analytics.</p>
</li>
</ul>
</li>
<li><p><strong>Invalidate Cache on Updates</strong></p>
<ul>
<li>When updating your website, invalidate the CloudFront cache to ensure that users get the latest version. You can do this by creating an <strong>Invalidation</strong> request in the CloudFront console for files you’ve updated (e.g., <code>/*</code>).</li>
</ul>
</li>
<li><p><strong>Consider Origin Shield for Heavily Accessed Content</strong></p>
<ul>
<li>If you anticipate heavy traffic, enable <strong>Origin Shield</strong> in CloudFront for extra caching between CloudFront and your S3 bucket, which helps reduce origin fetches and can improve cache hit ratio.</li>
</ul>
</li>
</ol>
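<p>The file-renaming advice in the Cache-Control practice above is exactly what bundlers automate with content-hashed filenames: the hash is derived from the file's bytes, so any change produces a new URL and a year-long cache can never serve stale code. A minimal sketch of the idea (the <code>hashedName</code> helper is illustrative, not part of any build tool):</p>
<pre><code class="lang-javascript">const crypto = require('crypto');

// Derive a short content hash and splice it into the filename,
// e.g. "main.js" becomes "main.&lt;8-hex-chars&gt;.js"
function hashedName(name, contents) {
  const hash = crypto.createHash('md5').update(contents).digest('hex').slice(0, 8);
  const dot = name.lastIndexOf('.');
  return `${name.slice(0, dot)}.${hash}${name.slice(dot)}`;
}

// Different contents yield different names, so the old cached copy is bypassed
console.log(hashedName('main.js', 'console.log("v1")'));
console.log(hashedName('main.js', 'console.log("v2")'));
</code></pre>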
<hr />
<h3 id="heading-conclusion">Conclusion</h3>
<p>Hosting your React website on AWS with S3 and CloudFront is an affordable and scalable solution that boosts your app’s performance globally. Following the above steps and best practices, you’ll achieve a secure, low-latency, and highly available website setup that’s optimized for users worldwide.</p>
<p>By using CloudFront’s caching, compression, and Origin Shield options, along with carefully managed Cache-Control headers, you’ll ensure that your app is fast and responsive.</p>
]]></content:encoded></item><item><title><![CDATA[Deploy a Node.js App on AWS Lambda with DynamoDB Using the Serverless Framework: A Step-by-Step Guide]]></title><description><![CDATA[Deploying applications in a serverless environment is becoming increasingly popular for its cost-effectiveness and ease of scaling. In this guide, you’ll learn how to deploy a Node.js application on AWS Lambda using the Serverless Framework, and conn...]]></description><link>https://basir.devsomeware.com/deploy-a-nodejs-app-on-aws-lambda-with-dynamodb-using-the-serverless-framework-a-step-by-step-guide</link><guid isPermaLink="true">https://basir.devsomeware.com/deploy-a-nodejs-app-on-aws-lambda-with-dynamodb-using-the-serverless-framework-a-step-by-step-guide</guid><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[AWS]]></category><category><![CDATA[LaMDA]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[serverless]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[S3]]></category><category><![CDATA[Cloud]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[cloudfront]]></category><category><![CDATA[APIs]]></category><category><![CDATA[REST API]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sat, 02 Nov 2024 21:14:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730572486014/f5ba1f80-a49f-47ff-b02c-bd2c12b0273d.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Deploying applications in a serverless environment is becoming increasingly popular for its cost-effectiveness and ease of scaling. In this guide, you’ll learn how to deploy a Node.js application on AWS Lambda using the Serverless Framework, and connect it to DynamoDB. By the end, you’ll have a fully functional serverless app that interacts with DynamoDB, all with minimal setup and management!</p>
<hr />
<h2 id="heading-1-what-is-the-serverless-framework"><strong>1. What is the Serverless Framework?</strong></h2>
<p>The Serverless Framework is an open-source framework that helps developers manage serverless deployments on various cloud platforms, including AWS, Azure, and Google Cloud. It simplifies deploying serverless applications by allowing you to define resources and functions in a configuration file, eliminating the need for complex infrastructure setup.</p>
<hr />
<h2 id="heading-2-prerequisites"><strong>2. Prerequisites</strong></h2>
<p>To follow along with this guide, ensure you have the following:</p>
<ul>
<li><p><strong>Node.js and npm</strong>: Install the latest version of Node.js from <a target="_blank" href="http://nodejs.org">nodejs.org</a>.</p>
</li>
<li><p><strong>AWS Account</strong>: Set up an AWS account with access to Lambda and DynamoDB services. <a target="_blank" href="https://aws.amazon.com/">Sign up here</a>.</p>
</li>
<li><p><strong>Serverless Framework</strong>: Install it globally by running the command below if you haven’t already:</p>
<pre><code class="lang-bash">  npm install -g serverless
</code></pre>
</li>
</ul>
<hr />
<h2 id="heading-3-setting-up-a-new-serverless-project"><strong>3. Setting Up a New Serverless Project</strong></h2>
<p>The first step is to create a Serverless project that will serve as the base for deploying your AWS resources and functions.</p>
<h3 id="heading-steps"><strong>Steps:</strong></h3>
<ol>
<li><p><strong>Create the Project</strong><br /> Open your terminal and run the following command:</p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">Make sure to create an account on the Serverless website and validate your credentials through the terminal.</div>
 </div>

<pre><code class="lang-bash"> serverless
</code></pre>
<p> This will display several templates. Choose the third option: <strong>AWS / Node.js / Express API with DynamoDB</strong>.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730572964059/8cef1049-a935-424f-af00-b7d6d6644ba6.png" alt class="image--center mx-auto" /></p>
<p> Then, enter your project name. The Serverless Framework will generate the complete boilerplate code in that folder.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730573046568/f62e9d7a-aa0e-4ffa-8fd2-1cd2f76a5c16.png" alt class="image--center mx-auto" /></p>
<ol start="3">
<li>Then select <strong>Create a new app</strong> to continue.</li>
</ol>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730573068996/01768c07-532c-4a6a-83bc-a57f75495ca4.png" alt class="image--center mx-auto" /></p>
<p>    4. Next, provide an app name. A Lambda function will be created with this name.</p>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730573997540/24c7645a-11e2-4357-95f2-f501372b555a.png" alt class="image--center mx-auto" /></p>
<ol start="5">
<li><strong>Install Dependencies</strong><br /> The boilerplate already includes some dependencies; install a few more that this project needs:</li>
</ol>
<pre><code class="lang-bash">    <span class="hljs-built_in">cd</span> <span class="hljs-string">"your project name"</span>
</code></pre>
<pre><code class="lang-bash">    npm i
</code></pre>
<pre><code class="lang-bash">    npm i crypto-js
    npm i jsonwebtoken
    npm i cors
</code></pre>
<p>    The AWS SDK (the <code>@aws-sdk/*</code> packages) is essential for integrating the app with AWS services. It is already installed if you selected the AWS / Node.js / DynamoDB template.</p>
<hr />
<h2 id="heading-4-configuring-serverlessyml-for-lambda-and-dynamodb"><strong>4. Configuring</strong> <code>serverless.yml</code> for Lambda and DynamoDB</h2>
<p>The <code>serverless.yml</code> file defines your resources and functions. Here’s how to configure it to create a Lambda function and DynamoDB table.</p>
<h3 id="heading-edit-the-serverlessyml-file-as-follows"><strong>Edit the</strong> <code>serverless.yml</code> file as follows:</h3>
<pre><code class="lang-yaml"><span class="hljs-comment"># "org" ensures this Service is used with the correct Serverless Framework Access Key.</span>
<span class="hljs-attr">org:</span> <span class="hljs-string">basir</span> <span class="hljs-comment">#change this according to your orgname</span>
<span class="hljs-comment"># "app" enables Serverless Framework Dashboard features and sharing them with other Services.</span>
<span class="hljs-attr">app:</span> <span class="hljs-string">basirapp</span> <span class="hljs-comment">#change this to your app name</span>
<span class="hljs-comment"># "service" is the name of this project. This will also be added to your AWS resource names.</span>
<span class="hljs-attr">service:</span> <span class="hljs-string">basirapp</span> <span class="hljs-comment">#change this according to your service name</span>

<span class="hljs-attr">stages:</span>
  <span class="hljs-attr">default:</span>
    <span class="hljs-attr">params:</span>
      <span class="hljs-attr">tableName:</span> <span class="hljs-string">"basirtable"</span> <span class="hljs-comment">#change or create this on your aws account</span>

<span class="hljs-attr">provider:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">aws</span>
  <span class="hljs-attr">runtime:</span> <span class="hljs-string">nodejs20.x</span>
  <span class="hljs-attr">region:</span> <span class="hljs-string">ap-south-1</span>
  <span class="hljs-attr">iam:</span>
    <span class="hljs-attr">role:</span>
      <span class="hljs-attr">statements:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-attr">Effect:</span> <span class="hljs-string">Allow</span>
          <span class="hljs-attr">Action:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">dynamodb:Query</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">dynamodb:Scan</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">dynamodb:GetItem</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">dynamodb:PutItem</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">dynamodb:UpdateItem</span>
            <span class="hljs-bullet">-</span> <span class="hljs-string">dynamodb:DeleteItem</span>
          <span class="hljs-attr">Resource:</span>
            <span class="hljs-bullet">-</span> <span class="hljs-attr">Fn::GetAtt:</span> [<span class="hljs-string">UsersTable</span>, <span class="hljs-string">Arn</span>]
  <span class="hljs-attr">environment:</span>
    <span class="hljs-attr">USERS_TABLE:</span> <span class="hljs-string">${param:tableName}</span>
  <span class="hljs-attr">httpApi:</span>
    <span class="hljs-attr">cors:</span>
      <span class="hljs-attr">allowedOrigins:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">'*'</span>  <span class="hljs-comment"># Allows all origins; use specific origins for production</span>
      <span class="hljs-attr">allowedHeaders:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">Content-Type</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">Authorization</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">X-Amz-Date</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">X-Api-Key</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">X-Amz-Security-Token</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">X-Requested-With</span>
      <span class="hljs-attr">allowedMethods:</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">GET</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">POST</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">PUT</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">DELETE</span>
        <span class="hljs-bullet">-</span> <span class="hljs-string">OPTIONS</span>
      <span class="hljs-attr">maxAge:</span> <span class="hljs-number">86400</span>

<span class="hljs-attr">functions:</span>
  <span class="hljs-attr">api:</span>
    <span class="hljs-attr">handler:</span> <span class="hljs-string">handler.handler</span>
    <span class="hljs-attr">events:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">httpApi:</span>
          <span class="hljs-attr">path:</span> <span class="hljs-string">"/{proxy+}"</span>  <span class="hljs-comment"># Correct syntax for a catch-all route</span>
          <span class="hljs-attr">method:</span> <span class="hljs-string">ANY</span>

<span class="hljs-attr">resources:</span>
  <span class="hljs-attr">Resources:</span>
    <span class="hljs-attr">UsersTable:</span>
      <span class="hljs-attr">Type:</span> <span class="hljs-string">AWS::DynamoDB::Table</span>
      <span class="hljs-attr">Properties:</span>
        <span class="hljs-attr">AttributeDefinitions:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">AttributeName:</span> <span class="hljs-string">email</span>
            <span class="hljs-attr">AttributeType:</span> <span class="hljs-string">S</span>
        <span class="hljs-attr">KeySchema:</span>
          <span class="hljs-bullet">-</span> <span class="hljs-attr">AttributeName:</span> <span class="hljs-string">email</span>
            <span class="hljs-attr">KeyType:</span> <span class="hljs-string">HASH</span>
        <span class="hljs-attr">BillingMode:</span> <span class="hljs-string">PAY_PER_REQUEST</span>
        <span class="hljs-attr">TableName:</span> <span class="hljs-string">${param:tableName}</span>
</code></pre>
<h3 id="heading-explanation"><strong>Explanation:</strong></h3>
<ul>
<li><ol>
<li><p><strong>org, app, and service</strong>:</p>
<ul>
<li><p>The <code>org</code> specifies the Serverless Framework organization (<code>basir</code>), ensuring that the service is linked with the correct Serverless Framework Access Key.</p>
</li>
<li><p>The <code>app</code> name (<code>basirapp</code>) enables features on the Serverless Framework Dashboard, facilitating easier management and sharing with other services.</p>
</li>
<li><p>The <code>service</code> (<code>basirapp</code>) is the name of this project, which will also be included in the names of the AWS resources created by this service.</p>
</li>
</ul>
<ol start="2">
<li><p><strong>stages</strong>:</p>
<ul>
<li>Defines a default stage with a parameter <code>tableName</code>, set to <code>"basirtable"</code>. This parameter is referenced throughout the configuration to specify the DynamoDB table name.</li>
</ul>
</li>
<li><p><strong>provider</strong>:</p>
<ul>
<li><p>Specifies AWS as the cloud provider and sets the runtime to <code>nodejs20.x</code>.</p>
</li>
<li><p>The <code>region</code> is defined as <code>ap-south-1</code>.</p>
</li>
<li><p>Configures an IAM role with permissions to perform various DynamoDB actions (<code>Query</code>, <code>Scan</code>, <code>GetItem</code>, <code>PutItem</code>, <code>UpdateItem</code>, and <code>DeleteItem</code>) on the specified DynamoDB table.</p>
</li>
<li><p>The <code>environment</code> section defines an environment variable <code>USERS_TABLE</code>, which utilizes the <code>tableName</code> parameter value (<code>basirtable</code>) for use within Lambda functions.</p>
</li>
</ul>
</li>
<li><p><strong>httpApi</strong>:</p>
<ul>
<li><p>Configures CORS (Cross-Origin Resource Sharing) settings to allow requests from any origin (<code>allowedOrigins: '*'</code>). For production environments, it is advisable to specify allowed origins explicitly.</p>
</li>
<li><p>Specifies the headers and HTTP methods that are permitted in requests, along with a cache duration of <code>maxAge: 86400</code> seconds (24 hours).</p>
</li>
</ul>
</li>
<li><p><strong>functions</strong>:</p>
<ul>
<li><p>Defines a Lambda function named <code>api</code>, with the handler implemented in <code>handler.handler</code>.</p>
</li>
<li><p>Sets up an <code>httpApi</code> event with a catch-all route (<code>/{proxy+}</code>) that allows all HTTP methods (<code>method: ANY</code>) to facilitate flexible request handling.</p>
</li>
</ul>
</li>
<li><p><strong>resources</strong>:</p>
<ul>
<li><p>Creates an AWS DynamoDB table named <code>UsersTable</code>.</p>
</li>
<li><p>The table's primary key is defined by the <code>email</code> attribute, which is of type <code>String</code> (<code>S</code>).</p>
</li>
<li><p>Configured with <code>BillingMode: PAY_PER_REQUEST</code>, allowing the table to automatically scale and bill based on the number of requests.</p>
</li>
<li><p>The table name is dynamically set using the <code>tableName</code> parameter (<code>basirtable</code>).</p>
</li>
</ul>
</li>
</ol>
</li>
</ol>
</li>
</ul>
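<p>The <code>${param:tableName}</code> references in the YAML above are resolved by the Serverless Framework at deploy time. As a mental model only (this is an illustrative re-implementation, not the framework's actual resolver, which also handles stages, environment variables, and CloudFormation references), the substitution works roughly like this:</p>

```javascript
// Illustrative sketch of how ${param:...} placeholders get resolved.
// The real Serverless Framework resolver is far more capable than this.
function resolveParams(value, params) {
  return value.replace(/\$\{param:([A-Za-z0-9_]+)\}/g, (match, name) => {
    if (!(name in params)) throw new Error(`Unknown param: ${name}`);
    return params[name];
  });
}

const stageParams = { tableName: "basirtable" };
// Both the USERS_TABLE env var and the DynamoDB TableName resolve to "basirtable":
const usersTable = resolveParams("${param:tableName}", stageParams);
```

<p>Because both the IAM statement's <code>Resource</code> and the Lambda's <code>USERS_TABLE</code> environment variable trace back to the same parameter, renaming the table in one place keeps the whole stack consistent.</p>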
<hr />
<h2 id="heading-5-writing-the-lambda-handler-code"><strong>5. Writing the Lambda Handler Code</strong></h2>
<p>Next, create the Lambda function to save data to DynamoDB.</p>
<h3 id="heading-steps-1"><strong>Steps:</strong></h3>
<ol>
<li><p><strong>Update the handler code</strong><br /> In the root of the project, update <code>handler.js</code>.</p>
</li>
<li><p><strong>Add the Following Code</strong>:</p>
<pre><code class="lang-javascript"> <span class="hljs-keyword">const</span> { DynamoDBClient } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@aws-sdk/client-dynamodb"</span>);

 <span class="hljs-keyword">const</span> {
   DynamoDBDocumentClient,
   GetCommand,
   PutCommand,
 } = <span class="hljs-built_in">require</span>(<span class="hljs-string">"@aws-sdk/lib-dynamodb"</span>);
 <span class="hljs-keyword">const</span> crypto  = <span class="hljs-built_in">require</span>(<span class="hljs-string">"crypto-js"</span>);
 <span class="hljs-keyword">const</span> jwt = <span class="hljs-built_in">require</span>(<span class="hljs-string">"jsonwebtoken"</span>);
 <span class="hljs-keyword">const</span> express = <span class="hljs-built_in">require</span>(<span class="hljs-string">"express"</span>);
 <span class="hljs-keyword">const</span> serverless = <span class="hljs-built_in">require</span>(<span class="hljs-string">"serverless-http"</span>);
 <span class="hljs-keyword">const</span> cors = <span class="hljs-built_in">require</span>(<span class="hljs-string">'cors'</span>);
 <span class="hljs-keyword">const</span> app = express();

 <span class="hljs-keyword">const</span> USERS_TABLE = process.env.USERS_TABLE;
 <span class="hljs-keyword">const</span> AES_SECRET = <span class="hljs-string">"56snbwuy#kdhuyethj39738626rhhgfd"</span>; <span class="hljs-comment">// for real projects, load secrets from environment variables, not source code</span>
 <span class="hljs-keyword">const</span> jwtSecret = <span class="hljs-string">"jskhshs54w57qjhyt2652geftsrhvhagskn@medgus"</span>;
 <span class="hljs-keyword">const</span> client = <span class="hljs-keyword">new</span> DynamoDBClient();
 <span class="hljs-keyword">const</span> docClient = DynamoDBDocumentClient.from(client);

 app.use(express.json());
 <span class="hljs-keyword">const</span> corsOptions = {
   <span class="hljs-attr">origin</span>: <span class="hljs-string">'*'</span>, <span class="hljs-comment">// or '*' for all origins during development</span>
   <span class="hljs-attr">methods</span>: [<span class="hljs-string">'GET'</span>, <span class="hljs-string">'POST'</span>, <span class="hljs-string">'PUT'</span>, <span class="hljs-string">'DELETE'</span>, <span class="hljs-string">'OPTIONS'</span>],
   <span class="hljs-attr">allowedHeaders</span>: [<span class="hljs-string">'Content-Type'</span>, <span class="hljs-string">'Authorization'</span>, <span class="hljs-string">'X-Amz-Date'</span>, <span class="hljs-string">'X-Api-Key'</span>, <span class="hljs-string">'X-Amz-Security-Token'</span>, <span class="hljs-string">'X-Requested-With'</span>],
   <span class="hljs-attr">credentials</span>: <span class="hljs-literal">true</span>,
   <span class="hljs-attr">optionsSuccessStatus</span>: <span class="hljs-number">200</span> <span class="hljs-comment">// Some legacy browsers choke on status 204</span>
 };

 <span class="hljs-comment">// Use CORS with the specified options</span>
 app.use(cors(corsOptions));
 app.get(<span class="hljs-string">"/"</span>,<span class="hljs-function">(<span class="hljs-params">req,res</span>)=&gt;</span>{
   res.send(<span class="hljs-string">"Hello World"</span>)
 })
 <span class="hljs-comment">//all routes starts from here</span>
 app.post(<span class="hljs-string">"/register"</span>, <span class="hljs-keyword">async</span> (req, res) =&gt; {
   <span class="hljs-keyword">const</span> {name,email,password,clg,phone} = req.body
   <span class="hljs-keyword">const</span> hashpass = crypto.AES.encrypt(password,AES_SECRET).toString();
   <span class="hljs-comment">//checking the user is exist or not;</span>
   <span class="hljs-keyword">const</span> getParams = {
     <span class="hljs-attr">TableName</span>: USERS_TABLE,
     <span class="hljs-attr">Key</span>: {
       <span class="hljs-attr">email</span>: email,
     },
   };
   <span class="hljs-keyword">try</span>{
     <span class="hljs-keyword">const</span> data = <span class="hljs-keyword">await</span> docClient.send(<span class="hljs-keyword">new</span> GetCommand(getParams));
   <span class="hljs-keyword">if</span>(data.Item){
     <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">400</span>).json({<span class="hljs-attr">error</span>:<span class="hljs-string">"User already exists"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">false</span>})
   }
 }
   <span class="hljs-keyword">catch</span>(err){
     <span class="hljs-built_in">console</span>.log(err)
      <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">500</span>).json({<span class="hljs-attr">error</span>:<span class="hljs-string">"Could not create user"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">false</span>}) <span class="hljs-comment">// return here so execution doesn't fall through and send a second response</span>
   }
   <span class="hljs-comment">//create user</span>
   <span class="hljs-keyword">const</span> params = {
     <span class="hljs-attr">TableName</span>: USERS_TABLE,
     <span class="hljs-attr">Item</span>: {
       <span class="hljs-attr">name</span>: name,
       <span class="hljs-attr">email</span>: email,
       <span class="hljs-attr">password</span>: hashpass,
       <span class="hljs-attr">clg</span>: clg,
       <span class="hljs-attr">phone</span>: phone,
     },
   };
 <span class="hljs-keyword">try</span>{
 <span class="hljs-keyword">let</span> a = <span class="hljs-keyword">await</span> docClient.send(<span class="hljs-keyword">new</span> PutCommand(params));
 res.status(<span class="hljs-number">200</span>).json({<span class="hljs-attr">message</span>:<span class="hljs-string">"User created successfully"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">true</span>})
 }
 <span class="hljs-keyword">catch</span>(err){
   <span class="hljs-built_in">console</span>.log(err)
   res.status(<span class="hljs-number">500</span>).json({<span class="hljs-attr">error</span>:<span class="hljs-string">"Could not create user"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">false</span>})
 }
 })
 <span class="hljs-comment">//login endpoint</span>
 app.post(<span class="hljs-string">"/login"</span>, <span class="hljs-keyword">async</span> (req, res) =&gt; {
   <span class="hljs-keyword">try</span>{
     <span class="hljs-keyword">const</span> {email,password} = req.body;
     <span class="hljs-keyword">const</span> params = {
       <span class="hljs-attr">TableName</span>: USERS_TABLE,
       <span class="hljs-attr">Key</span>: {
         <span class="hljs-attr">email</span>: email,
       },
     };
     <span class="hljs-keyword">try</span>{
      <span class="hljs-keyword">let</span> data = <span class="hljs-keyword">await</span> docClient.send(<span class="hljs-keyword">new</span> GetCommand(params));
       <span class="hljs-keyword">if</span>(!data.Item){
         <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">404</span>).json({<span class="hljs-attr">error</span>:<span class="hljs-string">"User not found"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">false</span>})
       }
       <span class="hljs-keyword">else</span>{
         <span class="hljs-keyword">const</span> decryptpass = crypto.AES.decrypt(data.Item.password,AES_SECRET).toString(crypto.enc.Utf8);
         <span class="hljs-keyword">if</span>(decryptpass == password){
           <span class="hljs-keyword">const</span> token = jwt.sign({<span class="hljs-attr">email</span>:email},jwtSecret);
           <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">200</span>).json({<span class="hljs-attr">message</span>:<span class="hljs-string">"Login Successfull"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">true</span>,<span class="hljs-attr">token</span>:token})
         }
         <span class="hljs-keyword">else</span>{
           <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">401</span>).json({<span class="hljs-attr">error</span>:<span class="hljs-string">"Invalid Password"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">false</span>})
         }
       }
     }
     <span class="hljs-keyword">catch</span>(err){
       <span class="hljs-built_in">console</span>.log(err)
       res.status(<span class="hljs-number">500</span>).json({<span class="hljs-attr">error</span>:<span class="hljs-string">"Login Failed ! Db Error."</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">false</span>})
     }
   }
   <span class="hljs-keyword">catch</span>(err){
     <span class="hljs-built_in">console</span>.log(err)
     res.status(<span class="hljs-number">500</span>).json({<span class="hljs-attr">error</span>:<span class="hljs-string">"Some thing Went Wrong .Try again later!"</span>,<span class="hljs-attr">success</span>:<span class="hljs-literal">false</span>})
   }
 })

 app.use(<span class="hljs-function">(<span class="hljs-params">req, res, next</span>) =&gt;</span> {
   <span class="hljs-keyword">return</span> res.status(<span class="hljs-number">404</span>).json({
     <span class="hljs-attr">error</span>: <span class="hljs-string">"Not Found"</span>,
   });
 });
 <span class="hljs-built_in">exports</span>.handler = serverless(app);
</code></pre>
</li>
</ol>
<h3 id="heading-explanation-1"><strong>Explanation</strong>:</h3>
<ul>
<li><p>This Lambda function, implemented with Node.js and Express, interacts with Amazon DynamoDB to manage user registrations and logins. Key features include:</p>
<ol>
<li><p><strong>Dependencies</strong>: Utilizes <code>@aws-sdk/client-dynamodb</code> for database operations, <code>crypto-js</code> for password encryption, <code>jsonwebtoken</code> for creating JWT tokens, and <code>express</code> for routing.</p>
</li>
<li><p><strong>Environment Variables</strong>: Configures <code>USERS_TABLE</code> for DynamoDB, along with secret keys for AES encryption and JWT signing.</p>
</li>
<li><p><strong>Endpoints</strong>:</p>
<ul>
<li><p><strong>GET</strong> <code>/</code>: Returns "Hello World" to indicate the server is running.</p>
</li>
<li><p><strong>POST</strong> <code>/register</code>: Handles user registration by checking for existing users, encrypting passwords, and saving user data to DynamoDB.</p>
</li>
<li><p><strong>POST</strong> <code>/login</code>: Authenticates users by verifying credentials and returning a JWT token upon successful login.</p>
</li>
</ul>
</li>
<li><p><strong>Error Handling</strong>: Includes robust error handling to provide informative responses for various scenarios, such as user existence and authentication failures.</p>
</li>
<li><p><strong>404 Handling</strong>: Returns a 404 status for any undefined routes.</p>
</li>
<li><p><strong>AWS Lambda Integration</strong>: The <code>exports.handler</code> allows the Express app to run in the AWS Lambda environment.</p>
</li>
</ol>
</li>
</ul>
<p>    In summary, this function provides a secure user management system with encrypted passwords and JWT-based authentication, leveraging DynamoDB for data storage.</p>
<hr />
<h2 id="heading-6-deploying-the-application"><strong>6. Deploying the Application</strong></h2>
<p>With everything configured, let’s deploy your application to AWS Lambda!</p>
<h3 id="heading-steps-2"><strong>Steps:</strong></h3>
<ol>
<li><p><strong>Configure AWS Credentials</strong><br /> Set up your AWS credentials on your local machine:</p>
<pre><code class="lang-bash"> serverless config credentials --provider aws --key YOUR_AWS_ACCESS_KEY --secret YOUR_AWS_SECRET_KEY
</code></pre>
</li>
<li><p><strong>Deploy the Application</strong><br /> Run the following command to deploy your application:</p>
<pre><code class="lang-bash"> serverless deploy
</code></pre>
<p> This command packages your application, uploads it to AWS, and creates the resources specified in <code>serverless.yml</code>.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730576361615/49458fa1-050d-4bf9-b5b3-9b96210ecdf2.png" alt class="image--center mx-auto" /></p>
<p> A DynamoDB table named "basirtable" is created through the <code>serverless.yml</code> configuration file, which defines the primary key as the <code>email</code> attribute. The table operates in <code>PAY_PER_REQUEST</code> billing mode, ensuring efficient scaling and cost management based on actual usage.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730576585614/88c9439f-63a0-4b9d-b404-56d30e1a786e.png" alt class="image--center mx-auto" /></p>
<p> Also check Lambda to confirm that the function was deployed successfully.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730576675721/f035b651-eb09-4bc5-b0b5-b383b7b440c8.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h2 id="heading-7-testing-the-deployment"><strong>7. Testing the Deployment</strong></h2>
<p>After deployment, test the endpoint to confirm that it works as expected.</p>
<ol>
<li><p><strong>Get the Endpoint URL</strong><br /> The deployment process will output an endpoint URL in the terminal.</p>
</li>
<li><p><strong>Send a Request to the URL</strong><br /> Use a tool like curl or Postman to send a GET request to the endpoint:</p>
<pre><code class="lang-bash"> GET https://x60hefyye3.execute-api.ap-south-1.amazonaws.com
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730576808018/460dd8b5-62e8-49cb-862e-120c5ed920bd.png" alt class="image--center mx-auto" /></p>
<p> <strong>Send a POST request to the <code>/register</code> endpoint through Postman to create an account.</strong></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730577066060/3f764f7d-747d-46ef-bbea-6232f92011b3.png" alt class="image--center mx-auto" /></p>
<p> Feel free to test the other endpoints as well; in my case, all endpoints work fine.</p>
</li>
</ol>
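<p>Beyond Postman, the deployed endpoints can also be smoke-tested from a small Node script using the built-in <code>fetch</code> (Node 18+). The endpoint URL and sample payload below are placeholders — substitute your own values; the network call only fires when <code>API_URL</code> is set, so the helper itself has no side effects:</p>

```javascript
// Sketch: smoke-testing the deployed /register endpoint with Node 18+ fetch.
// API_URL is a placeholder for your own API Gateway endpoint.
function buildRegisterRequest(user) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(user),
  };
}

async function smokeTest(baseUrl) {
  const res = await fetch(`${baseUrl}/register`, buildRegisterRequest({
    name: "Test User",
    email: "test@example.com",
    password: "s3cret",
    clg: "Test College",
    phone: "0000000000",
  }));
  // A first run should return success; a second run should report
  // "User already exists", confirming the DynamoDB existence check.
  console.log(res.status, await res.json());
}

// Only hit the network when an endpoint is actually configured.
if (process.env.API_URL) {
  smokeTest(process.env.API_URL).catch(console.error);
}
```
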
<hr />
<h2 id="heading-8-verifying-dynamodb-data"><strong>8. Verifying DynamoDB Data</strong></h2>
<p>Now, let’s confirm that the data was saved in DynamoDB.</p>
<ol>
<li><p><strong>Go to the DynamoDB Console</strong><br /> In your AWS Management Console, open DynamoDB.</p>
</li>
<li><p><strong>Check the Table</strong><br /> Find your table (<code>basirtable</code>), and open it to view items. You should see the item created by your Lambda function.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730577198201/3817e15c-f912-42bd-860c-b8a70920b743.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h2 id="heading-9-wrapping-up"><strong>9. Wrapping Up</strong></h2>
<p>Congratulations! 🎉 You’ve successfully deployed a Node.js app to AWS Lambda using the Serverless Framework and integrated it with DynamoDB. This setup demonstrates the power of serverless architecture, reducing costs and simplifying scaling while allowing you to quickly deploy applications.</p>
<p>By following this guide, you now have a foundational setup you can build upon, expanding your application’s features or integrating more AWS services. Enjoy building with serverless, and keep experimenting to leverage AWS Lambda and DynamoDB for even more advanced use cases!</p>
<hr />
<h2 id="heading-10-bonous-section">10. Bonus Section</h2>
<p>In this section, we will connect the Lambda function to our frontend via a REST API, allowing seamless interaction between the backend and frontend components. This integration will enable us to handle requests efficiently, enhancing the application's responsiveness and user experience. Let's proceed with setting up the API endpoint to establish this connection.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">I will create a frontend for this integration. You can either clone the provided frontend repository or use your own—both approaches will follow similar steps. We’ll update the API endpoint accordingly to connect it with the Lambda function and ensure a smooth data flow between the frontend and backend. Let's proceed with setting up the frontend and linking it to the updated API endpoint.</div>
</div>

<p>1. Clone the GitHub repo.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">If you don’t have Git installed on your machine, feel free to download the repository as a ZIP file. Then, extract it, and all the remaining steps will be the same.</div>
</div>

<pre><code class="lang-bash">git <span class="hljs-built_in">clone</span> https://github.com/BasirKhan418/basirapp-serverless-frontend.git
</code></pre>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> basirapp-serverless-frontend
</code></pre>
<pre><code class="lang-bash">npm i
</code></pre>
<ol start="2">
<li><p>Create a <code>.env</code> file and add the following content. Feel free to update the value of <code>VITE_BACKEND_URL</code> with the URL of your own Lambda function deployment or API gateway:</p>
<pre><code class="lang-bash"> VITE_BACKEND_URL=&lt;your_lambda_function_url&gt;
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730579817763/c0e377e7-d931-4e74-b214-61424fb57695.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Build your project / frontend.</p>
</li>
</ol>
<pre><code class="lang-bash">npm run build
</code></pre>
<ol start="4">
<li><p>After a successful build, a <code>dist</code> folder will be generated. This folder contains the production-ready files, which we need to upload to S3 to host the frontend application.</p>
</li>
<li><p>Create an S3 bucket with your desired name. Uncheck the option to block all public access, leave all other settings as default, and click <strong>Create bucket</strong> to proceed.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730580363271/3cf6d713-64d4-4edf-badd-423c3f048a85.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Click on the recently created bucket, go to <strong>Permissions</strong>, click <strong>Edit bucket policy</strong>, paste the policy below, and make sure to replace the placeholder with your S3 ARN.</p>
<pre><code class="lang-bash"> {
     <span class="hljs-string">"Version"</span>: <span class="hljs-string">"2012-10-17"</span>,
     <span class="hljs-string">"Statement"</span>: [
         {
             <span class="hljs-string">"Sid"</span>: <span class="hljs-string">"Statement1"</span>,
             <span class="hljs-string">"Principal"</span>: <span class="hljs-string">"*"</span>,
             <span class="hljs-string">"Effect"</span>: <span class="hljs-string">"Allow"</span>,
             <span class="hljs-string">"Action"</span>: [
                 <span class="hljs-string">"s3:GetObject"</span>
             ],
             <span class="hljs-string">"Resource"</span>: <span class="hljs-string">"your arn/*"</span>
         }
     ]
 }
</code></pre>
</li>
<li><p>Upload your <code>dist</code> folder contents to S3.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730580729210/c919730a-d09f-4d81-af7f-8ec6aaa9f768.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Navigate to the <strong>Properties</strong> tab, scroll down to the <strong>Static website hosting</strong> section at the bottom of the page, and click <strong>Edit</strong>. Enable static website hosting, enter <code>index.html</code> in the <strong>Index document</strong> field, and click <strong>Save changes</strong> to apply.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730580954753/4bdd6a1d-8ee6-438d-afb4-1c47b39dd83e.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>🎉 Hooray! Your frontend has been deployed successfully.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730581109381/e6a62c9f-14c8-4db5-926c-43641dc1ea5c.png" alt class="image--center mx-auto" /></p>
</li>
<li><p>Open the generated link in a new tab, try registering, and check if the data is being saved in DynamoDB.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730581276766/b7e24a61-0b52-4b16-b1c9-f3ccd881ea9b.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
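<p>If you prefer the command line, the upload and static-hosting steps above can also be done with the AWS CLI. This is an optional sketch; replace <code>your-bucket-name</code> with the name of the bucket you created:</p>
<pre><code class="lang-bash"># Upload the production build to the bucket (run from the project root)
aws s3 sync dist/ s3://your-bucket-name --delete

# Enable static website hosting with index.html as the index document
aws s3 website s3://your-bucket-name --index-document index.html
</code></pre>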
<h2 id="heading-conclusion"><strong>Conclusion:</strong></h2>
<p>By following this guide, titled <strong>"Deploy a Node.js App on AWS Lambda with DynamoDB Using Serverless Framework: Step-by-Step Guide,"</strong> you've successfully deployed a full-stack, serverless registration and login system using AWS. Your Node.js backend now operates on AWS Lambda, with DynamoDB handling data storage, ensuring both security and scalability. Additionally, hosting the frontend on S3 offers a cost-effective, high-performance solution for web access.</p>
<p>This setup not only provides a robust foundation for authentication workflows but also streamlines application management by leveraging AWS services. With this approach, you're well-equipped to scale effortlessly, adding new features or integrating additional AWS services as needed.</p>
<p>In summary, this beginner-friendly, step-by-step guide has equipped you with the knowledge to deploy a fully functional serverless application in the cloud, perfect for developers looking to learn AWS Lambda and DynamoDB integration!</p>
]]></content:encoded></item><item><title><![CDATA[How to Host a React App on an EC2 Instance: A Step-by-Step Guide]]></title><description><![CDATA[Hosting your React app on an EC2 instance is a great way to have more control over your web application’s infrastructure. In this guide, I'll walk you through each step, from setting up the EC2 instance to deploying your app. Whether you're a beginne...]]></description><link>https://basir.devsomeware.com/how-to-host-a-react-app-on-an-ec2-instance-a-step-by-step-guide</link><guid isPermaLink="true">https://basir.devsomeware.com/how-to-host-a-react-app-on-an-ec2-instance-a-step-by-step-guide</guid><category><![CDATA[deploy react app]]></category><category><![CDATA[React]]></category><category><![CDATA[aws ec2]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Web Development]]></category><category><![CDATA[AWS Tutorials ]]></category><category><![CDATA[nginx]]></category><category><![CDATA[Node.js]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Mon, 28 Oct 2024 13:19:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730115205789/7dbd416d-fdb1-4876-9e2f-a2a4c52b018e.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Hosting your React app on an EC2 instance is a great way to have more control over your web application’s infrastructure. In this guide, I'll walk you through each step, from setting up the EC2 instance to deploying your app. Whether you're a beginner or have some experience, this guide is for you!</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before we dive in, make sure you have the following:</p>
<ul>
<li><p>A basic React app ready to deploy.</p>
</li>
<li><p>An AWS account.</p>
</li>
<li><p>Basic knowledge of SSH, Linux commands, and AWS services.</p>
</li>
</ul>
<hr />
<h3 id="heading-step-1-create-an-ec2-instance">Step 1: <strong>Create an EC2 Instance</strong></h3>
<ol>
<li><p><strong>Log into your AWS Console</strong><br /> Go to the <a target="_blank" href="https://aws.amazon.com/">AWS Management Console</a> and log in to your account.</p>
</li>
<li><p><strong>Navigate to EC2</strong><br /> From the AWS Console, search for <strong>EC2</strong> and click on <strong>Launch Instance</strong>.</p>
</li>
<li><p><strong>Configure the Instance</strong></p>
<ul>
<li><p><strong>Choose an Amazon Machine Image (AMI)</strong>: Select the latest version of <strong>Ubuntu</strong> (20.04 or higher).</p>
</li>
<li><p><strong>Choose Instance Type</strong>: For most basic projects, a t2.micro instance (eligible for free tier) works fine.</p>
</li>
<li><p><strong>Configure Instance Details</strong>: Keep the default settings.</p>
</li>
<li><p><strong>Add Storage</strong>: Stick with the default size unless your app requires more space.</p>
</li>
<li><p><strong>Add Tags</strong>: This is optional, but you can tag your instance for easy identification.</p>
</li>
<li><p><strong>Configure Security Group</strong>: Allow the following:</p>
<ul>
<li><p><strong>HTTP (port 80)</strong> for web access</p>
</li>
<li><p><strong>SSH (port 22)</strong> for remote login</p>
</li>
<li><p><strong>HTTPS (port 443)</strong> if you plan to use SSL.</p>
</li>
</ul>
</li>
</ul>
</li>
<li><p><strong>Launch the Instance</strong><br /> Choose or create a new key pair to SSH into your instance, then launch the instance. Your EC2 instance will take a few moments to spin up.</p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730115368983/0120a247-afe7-4694-9787-4437f7eead20.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730115459702/3eb80cc8-b39c-4314-a7db-fdfd66cedbba.png" alt class="image--center mx-auto" /></p>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730115475636/0acd1251-41c1-4b09-a0cf-ac763720f62b.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h3 id="heading-step-2-connect-to-your-ec2-instance">Step 2: <strong>Connect to Your EC2 Instance</strong></h3>
<ol>
<li><p><strong>Download your private key (.pem file)</strong><br /> Ensure you have the <code>.pem</code> file of the key pair you created earlier.</p>
</li>
<li><p><strong>Connect via SSH</strong><br /> Open your terminal (or Git Bash on Windows) and run the following command to SSH into your EC2 instance:</p>
</li>
<li><pre><code class="lang-bash">  chmod 400 <span class="hljs-string">"path_to_your_pem_file.pem"</span>
</code></pre>
<pre><code class="lang-bash"> ssh -i <span class="hljs-string">"path_to_your_pem_file.pem"</span> ubuntu@your_ec2_public_ip
</code></pre>
<p> Replace <code>path_to_your_pem_file.pem</code> with the location of your <code>.pem</code> file and <code>your_ec2_public_ip</code> with your EC2 instance’s public IP address (available in the EC2 dashboard).</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730115841141/9c88491e-84a0-415f-a64f-4af5742f9944.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-option-2-connecting-through-aws-console"><strong>Option 2: Connecting Through AWS Console</strong></h4>
<p> If you're unfamiliar with SSH or facing issues, you can also connect directly through the <strong>AWS Console</strong>:</p>
<ol>
<li><p>In your EC2 dashboard, select your instance.</p>
</li>
<li><p>Click on the <strong>Connect</strong> button at the top.</p>
</li>
<li><p>Choose <strong>EC2 Instance Connect</strong>, and you can connect directly from the browser without needing a private key.</p>
</li>
</ol>
</li>
</ol>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730115973312/d50dd0c3-3c4e-42d4-a838-8c1f021400f9.png" alt class="image--center mx-auto" /></p>
<p>    <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730116018140/84fe3e0c-feed-4409-8d2b-9329501a161c.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-step-3-update-and-install-dependencies">Step 3: <strong>Update and Install Dependencies</strong></h3>
<p>Once you’re inside your EC2 instance, it’s time to install the required software.</p>
<ol>
<li><p><strong>Update the system packages</strong>:</p>
<pre><code class="lang-bash"> sudo apt update &amp;&amp; sudo apt upgrade -y
</code></pre>
</li>
<li><p><strong>Install Node.js using the Node Version Manager (</strong><a target="_blank" href="https://www.digitalocean.com/community/tutorials/how-to-install-node-js-on-ubuntu-20-04#option-3-installing-node-using-the-node-version-manager">nvm</a><strong>)</strong>:<br /> <mark>React apps run on Node.js, so we need to install it.</mark></p>
<pre><code class="lang-bash"> curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
</code></pre>
<pre><code class="lang-bash"> <span class="hljs-built_in">source</span> ~/.bashrc
</code></pre>
<pre><code class="lang-bash"> nvm install v20.9.0
</code></pre>
<p> You can check the versions installed with:</p>
<pre><code class="lang-bash"> node -v
 npm -v
</code></pre>
</li>
<li><p><strong>Install Nginx</strong>:<br /> Nginx will be used as a reverse proxy to serve your React app.</p>
<pre><code class="lang-bash"> sudo apt install nginx -y
</code></pre>
</li>
<li><p><strong>Start Nginx and enable it</strong>:</p>
<pre><code class="lang-bash"> sudo systemctl start nginx
 sudo systemctl <span class="hljs-built_in">enable</span> nginx
</code></pre>
<p> You can check if Nginx is working by navigating to your EC2 instance’s public IP in a browser. You should see the default Nginx welcome page.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730121330865/c44928e8-a1dd-4898-b556-d17220fe4af3.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h3 id="heading-step-4-configure-firewall-rules">Step 4: <strong>Configure Firewall Rules</strong></h3>
<ol>
<li><p><strong>Open necessary ports</strong>:<br /> Make sure your security group allows incoming traffic on HTTP (port 80). You can modify this from the AWS EC2 console under <strong>Security Groups</strong>.</p>
</li>
<li><div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">We have done this step while creating ec2 instance in this blog.</div>
 </div>


</li>
</ol>
<hr />
<h3 id="heading-step-5-deploy-your-react-app">Step 5: <strong>Deploy Your React App</strong></h3>
<p>Now it's time to get your React app onto the server.</p>
<ol>
<li><p><strong>Clone your React app from GitHub</strong>:</p>
<p> First, install Git:</p>
<pre><code class="lang-bash"> sudo apt install git -y
</code></pre>
<p> Then, clone your app:</p>
<pre><code class="lang-bash"> git <span class="hljs-built_in">clone</span> https://github.com/your-username/your-react-app.git
</code></pre>
</li>
<li><p><strong>Navigate to your app directory</strong>:</p>
<pre><code class="lang-bash"> <span class="hljs-built_in">cd</span> your-react-app
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730117117091/2af4c9b3-f650-49e3-aaee-a9935fedd407.png" alt class="image--center mx-auto" /></p>
</li>
<li><p><strong>Install the dependencies</strong>:</p>
<pre><code class="lang-bash"> npm install
</code></pre>
</li>
<li><p><strong>Build your React app</strong>:</p>
<pre><code class="lang-bash"> npm run build
</code></pre>
<p> This command creates a production-ready version of your React app in the <mark>dist</mark> folder (or <mark>build</mark> for Create React App projects).</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730117317444/c9d6a420-6559-4504-b75d-3c901ab0a928.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
<hr />
<h3 id="heading-step-6-configure-nginx-to-serve-the-react-app">Step 6: <strong>Configure Nginx to Serve the React App</strong></h3>
<ol>
<li><p><strong>Remove the default Nginx configuration</strong>:</p>
<pre><code class="lang-bash"> sudo rm /etc/nginx/nginx.conf
</code></pre>
</li>
<li><p><strong>Create a new Nginx configuration file</strong> for your React app:</p>
<pre><code class="lang-bash"> sudo vi /etc/nginx/nginx.conf
</code></pre>
</li>
<li><p><strong>Add the following configuration</strong>:</p>
<pre><code class="lang-nginx"> <span class="hljs-section">events</span> {
     <span class="hljs-comment"># Event directives...</span>
 }

 <span class="hljs-section">http</span> {
     <span class="hljs-section">server</span> {
     <span class="hljs-attribute">listen</span> <span class="hljs-number">80</span>;
     <span class="hljs-attribute">server_name</span> <span class="hljs-string">"your public ip"</span>;

     <span class="hljs-attribute">location</span> / {
         <span class="hljs-attribute">proxy_pass</span> http://localhost:{<span class="hljs-attribute">PORT</span> LIKE <span class="hljs-number">8080</span>};
         <span class="hljs-attribute">proxy_http_version</span> <span class="hljs-number">1</span>.<span class="hljs-number">1</span>;
         <span class="hljs-attribute">proxy_set_header</span> Upgrade <span class="hljs-variable">$http_upgrade</span>;
         <span class="hljs-attribute">proxy_set_header</span> Connection <span class="hljs-string">'upgrade'</span>;
         <span class="hljs-attribute">proxy_set_header</span> Host <span class="hljs-variable">$host</span>;
         <span class="hljs-attribute">proxy_cache_bypass</span> <span class="hljs-variable">$http_upgrade</span>;
     }
     }
 }
</code></pre>
</li>
<li><p><strong>Reload Nginx</strong></p>
<pre><code class="lang-bash"> sudo nginx -s reload
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730119836027/d0599479-4121-4766-aaa7-557817de64c6.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
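<p>Tip: before reloading, it’s worth validating the configuration syntax so a typo doesn’t take Nginx down:</p>
<pre><code class="lang-bash"># Test the configuration for syntax errors, then reload only if the test passes
sudo nginx -t &amp;&amp; sudo nginx -s reload
</code></pre>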
<hr />
<h2 id="heading-step-7-start-your-react-app">Step 7: <strong>Start Your React App</strong></h2>
<p>Before starting the app, we have to install a couple of dependencies.</p>
<h3 id="heading-1-pm2-process-manager-2">1. <strong>PM2 (Process Manager 2)</strong>:</h3>
<p>PM2 is a production-ready process manager for Node.js applications. It allows you to run, manage, and monitor Node.js processes in the background. Key features include:</p>
<ul>
<li><p><strong>Process management</strong>: Easily start, stop, restart, and manage multiple Node.js applications.</p>
</li>
<li><p><strong>Monitoring</strong>: Provides real-time logs, metrics, and performance monitoring.</p>
</li>
<li><p><strong>Load balancing</strong>: Automatically balances load across multiple CPU cores for better performance.</p>
</li>
<li><p><strong>Startup scripts</strong>: Automatically restarts apps on crashes or server reboots.</p>
</li>
</ul>
<p><strong>Installation</strong>:</p>
<pre><code class="lang-bash">npm install -g pm2
</code></pre>
<h3 id="heading-2-serve">2. <strong>Serve</strong>:</h3>
<p>Serve is a simple and lightweight HTTP server designed to serve static files, such as the <code>dist</code> folder generated by build tools like React. It's ideal for quickly serving a single-page application (SPA) or static content.</p>
<p>Key features include:</p>
<ul>
<li><p><strong>Easy static file hosting</strong>: Serve static files from any folder with a single command.</p>
</li>
<li><p><strong>Single-page application (SPA) support</strong>: Perfect for apps like React, with fallback routing.</p>
</li>
<li><p><strong>Lightweight and fast</strong>: Minimal setup for quick static file hosting.</p>
</li>
</ul>
<p><strong>Installation</strong>:</p>
<pre><code class="lang-bash">npm install -g serve
</code></pre>
<p>These two libraries together allow you to serve static content (like a React app) and manage its processes efficiently in a production environment.</p>
<h3 id="heading-3-finally-starting-the-react-app">3. Finally, start the React app</h3>
<pre><code class="lang-bash"><span class="hljs-built_in">cd</span> <span class="hljs-string">"your react app folder"</span>
</code></pre>
<pre><code class="lang-bash">pm2 start <span class="hljs-string">"serve -s dist -l 8080"</span> --name react-app
</code></pre>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730120484276/7c380ff7-db34-4a35-9243-13de0c8928d8.png" alt class="image--center mx-auto" /></p>
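<p>Optionally, you can have PM2 restart the app automatically after a server reboot. A short sketch using PM2’s built-in persistence commands:</p>
<pre><code class="lang-bash"># Register PM2 as a system service (it prints a command for you to run)
pm2 startup

# Persist the current process list so react-app is resurrected on boot
pm2 save
</code></pre>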
<hr />
<h3 id="heading-step-8-access-your-react-app">Step 8: <strong>Access Your React App</strong></h3>
<p>Now, when you navigate to your EC2 instance’s public IP in a browser, you should see your React app running!</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730120551319/93d74555-7ac8-47ab-9248-400ea44b2e08.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-bonus-add-a-custom-domain-to-your-app">Bonus: <strong>Add a custom domain to your app.</strong></h3>
<ol>
<li><p>Log in to your domain name provider and add an “A” record pointing to your EC2 instance’s public IP.</p>
</li>
<li><p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730120801530/c105577c-2f04-48eb-9eb7-0542adba96d7.png" alt class="image--center mx-auto" /></p>
 <div data-node-type="callout">
 <div data-node-type="callout-emoji">💡</div>
 <div data-node-type="callout-text">In my case, I am using Cloudflare for managing DNS, so I don't need to add an SSL certificate. However, if you're not using Cloudflare, please secure your app by adding an SSL certificate through Let's Encrypt or AWS Certificate Manager.</div>
 </div>
</li>
<li><p>Update your Nginx configuration (Optional)</p>
<pre><code class="lang-bash"> events {
     <span class="hljs-comment"># Event directives...</span>
 }

 http {
     server {
     listen 80;
     server_name <span class="hljs-string">"your domain name"</span>;

     location / {
         proxy_pass http://localhost:{PORT LIKE 8080};
         proxy_http_version 1.1;
         proxy_set_header Upgrade <span class="hljs-variable">$http_upgrade</span>;
         proxy_set_header Connection <span class="hljs-string">'upgrade'</span>;
         proxy_set_header Host <span class="hljs-variable">$host</span>;
         proxy_cache_bypass <span class="hljs-variable">$http_upgrade</span>;
     }
     }
 }
</code></pre>
<p> <img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730121207541/99d1f327-6bdc-4064-aa7f-805a7c2d1403.png" alt class="image--center mx-auto" /></p>
</li>
</ol>
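<p>If you’re not behind Cloudflare, a common way to add the SSL certificate mentioned in the callout is Certbot with its Nginx plugin. A hedged sketch; replace <code>yourdomain.com</code> with your own domain:</p>
<pre><code class="lang-bash"># Install Certbot and its Nginx plugin
sudo apt install certbot python3-certbot-nginx -y

# Obtain a certificate and let Certbot update the Nginx config for HTTPS
sudo certbot --nginx -d yourdomain.com
</code></pre>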
<hr />
<h3 id="heading-conclusion">Conclusion</h3>
<p>That’s it! 🎉 You’ve successfully hosted your React app on an EC2 instance. By following these steps, you’ve learned how to:</p>
<ul>
<li><p>Set up an EC2 instance.</p>
</li>
<li><p>Install necessary dependencies (Node.js, npm, Nginx).</p>
</li>
<li><p>Deploy a React app using Nginx as a reverse proxy.</p>
</li>
<li><p>Optionally, add a custom domain to your app.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Dockerfile Generation Made Easy: Build, Download, and Commit with DockerGen]]></title><description><![CDATA[Creating a Dockerfile from scratch can be time-consuming and complicated, especially if you're aiming to follow Docker best practices. But don’t worry—we’ve built a Dockerfile Generation Website that simplifies the entire process. With our tool, you ...]]></description><link>https://basir.devsomeware.com/dockerfile-generation-made-easy-build-download-and-commit-with-dockergen</link><guid isPermaLink="true">https://basir.devsomeware.com/dockerfile-generation-made-easy-build-download-and-commit-with-dockergen</guid><category><![CDATA[Free Dockerfile Generator]]></category><category><![CDATA[Docker]]></category><category><![CDATA[Dockerfile]]></category><category><![CDATA[GitHub]]></category><category><![CDATA[oauth]]></category><category><![CDATA[Devops]]></category><category><![CDATA[containerization]]></category><category><![CDATA[best practices]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Fri, 18 Oct 2024 14:18:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729260794888/702e2e22-2b9c-4707-8125-155cc20f06c1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Creating a Dockerfile from scratch can be time-consuming and complicated, especially if you're aiming to follow Docker best practices. But don’t worry—we’ve built a <strong>Dockerfile Generation Website</strong> that simplifies the entire process. With our tool, you can generate optimized Dockerfiles, download them, or evAuthen commit them directly to your GitHub repository in just a few clicks.</p>
<p>And the best part? It’s <strong>completely free</strong> for everyone! 🎉</p>
<p>This blog will show you how to use our tool, explain the two available options (using a repo URL or GitHub OAuth), and walk you through how to download or commit your Dockerfile.</p>
<hr />
<h2 id="heading-features-at-a-glance">Features at a Glance</h2>
<ul>
<li><p><strong>Automatic Dockerfile generation</strong> with best practices:</p>
<ul>
<li><p>Uses lightweight base images (like Alpine) for smaller builds.</p>
</li>
<li><p>Minimizes image layers for optimized image size.</p>
</li>
<li><p>Groups commands efficiently for better caching.</p>
</li>
</ul>
</li>
<li><p><strong>Two options</strong> for generating Dockerfiles:</p>
<ul>
<li><p><strong>Paste a repository URL</strong> (no login required).</p>
</li>
<li><p><strong>Log in via GitHub OAuth</strong> to access and commit Dockerfiles directly to your repositories.</p>
</li>
</ul>
</li>
<li><p><strong>Simple, user-friendly interface</strong>.</p>
</li>
<li><p><strong>Free for all users</strong>.</p>
</li>
</ul>
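<p>For illustration, here is the general shape of an optimized Dockerfile that these practices produce for a Node.js app. This is a hand-written sketch following the conventions listed above, not necessarily the exact output of the generator:</p>
<pre><code class="lang-dockerfile"># Stage 1: install dependencies and build on a lightweight Alpine image
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the production artifacts to keep the final image small
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/package*.json ./
RUN npm ci --omit=dev
EXPOSE 3000
CMD ["node", "dist/index.js"]
</code></pre>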
<hr />
<h2 id="heading-how-to-use-the-dockerfile-generator">How to Use the Dockerfile Generator</h2>
<p>You can either use our tool <strong>without logging in</strong> by pasting a repository URL or <strong>log in with GitHub OAuth</strong> to commit Dockerfiles directly to your repository.</p>
<p>Let’s break it down:</p>
<hr />
<h3 id="heading-option-1-generate-dockerfile-by-pasting-a-repo-url-no-login-required"><strong>Option 1: Generate Dockerfile by Pasting a Repo URL (No Login Required)</strong></h3>
<p>If you prefer not to log in, you can generate a Dockerfile by pasting the URL of your public GitHub repository.</p>
<h4 id="heading-step-1-paste-your-repo-url"><strong>Step 1: Paste Your Repo URL</strong></h4>
<p>Simply copy the <strong>URL</strong> of your GitHub repository and paste it into the designated field on our website.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729258025269/2796e03a-b4c1-4687-9ff8-67b8750ec827.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-2-dockergen-processing-amp-scanning"><strong>Step 2: Dockergen Processing &amp; Scanning</strong></h4>
<p>DockerGen detects the technology stack from your repo URL and scans the whole project:</p>
<ul>
<li><p><strong>Detects the technology</strong>.</p>
</li>
<li><p><strong>Scans the whole repo</strong>.</p>
</li>
<li><p><strong>Generates a Dockerfile</strong>.</p>
</li>
</ul>
<h4 id="heading-step-3-generate-dockerfile"><strong>Step 3: Generate Dockerfile</strong></h4>
<p>Click <strong>"Generate Dockerfile"</strong> and our tool will generate an optimized Dockerfile, ensuring best practices are followed for smaller and faster builds.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729258155765/335b7d57-984d-4943-b03f-f59a15f26d69.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-4-download-dockerfile"><strong>Step 4: Download Dockerfile</strong></h4>
<p>Once the Dockerfile is generated, you can download it directly to your local machine. <strong>Committing to GitHub is not supported</strong> if you haven’t logged in.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729258204842/cf83c323-4a91-4d1d-85a5-de3146881f7f.png" alt class="image--center mx-auto" /></p>
<hr />
<h3 id="heading-option-2-log-in-with-github-oauth-and-commit-dockerfile"><strong>Option 2: Log in with GitHub OAuth and Commit Dockerfile</strong></h3>
<p>For a more integrated experience, you can log in via <strong>GitHub OAuth</strong> and commit your Dockerfile directly to your repository.</p>
<h4 id="heading-step-1-log-in-with-github"><strong>Step 1: Log in with GitHub</strong></h4>
<p>Click the <strong>"Log in with GitHub"</strong> button on our homepage to authenticate securely using <strong>GitHub OAuth</strong>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729258806908/1a9c7472-1486-4765-94c8-539878ce5426.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729258868288/3b145c54-9f68-4591-b8a7-49b37261e473.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729258889216/858df3f5-751f-4fcf-b560-c1d2e647b6f6.png" alt class="image--center mx-auto" /></p>
<p>Click the <strong>Authorize</strong> button to sign in.</p>
<h4 id="heading-step-2-select-a-repository"><strong>Step 2: Select a Repository</strong></h4>
<p>Once logged in, you’ll see a list of all your GitHub repositories. Select the repository you want to generate a Dockerfile for.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729259269760/f87d860e-b482-45f6-b4c2-39d9405f7bbb.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-3-generate-dockerfile-1"><strong>Step 3: Generate Dockerfile</strong></h4>
<p>Hit <strong>"Generate Dockerfile"</strong> to get an optimized Dockerfile for the selected repository.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729259416559/b086389c-47ad-478c-a827-dfd348764c1d.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-4-commit-dockerfile-to-github"><strong>Step 4: Commit Dockerfile to GitHub</strong></h4>
<p>Once the Dockerfile is generated, you have two options:</p>
<ul>
<li><p><strong>Download</strong> it to your local machine.</p>
</li>
<li><p><strong>Commit</strong> it directly to your GitHub repository by selecting the <strong>branch</strong>, entering a <strong>commit message</strong>, and clicking <strong>"Commit to GitHub"</strong>.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729259452326/08c2ecca-5a52-4980-9cbf-23edaba9ce78.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729260458769/eabe91e2-03b8-40bf-a907-ff55703ebd26.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729260654301/443e594e-d710-44d3-a0ca-7b04c0d347cc.png" alt class="image--center mx-auto" /></p>
<hr />
<h2 id="heading-why-use-our-dockerfile-generator">Why Use Our Dockerfile Generator?</h2>
<h3 id="heading-1-time-saving">1. <strong>Time-Saving</strong></h3>
<p>Manually writing Dockerfiles takes time, especially if you want to follow best practices. Our tool handles this for you, generating optimized Dockerfiles in seconds.</p>
<h3 id="heading-2-optimized-for-performance">2. <strong>Optimized for Performance</strong></h3>
<p>We use <strong>lightweight base images</strong> like Alpine, minimize image layers, and optimize caching layers to ensure faster builds and smaller image sizes.</p>
<h3 id="heading-3-two-flexible-options">3. <strong>Two Flexible Options</strong></h3>
<p>Whether you want to <strong>paste a repo URL</strong> and generate a Dockerfile without logging in, or <strong>log in via GitHub</strong> for more advanced features, we’ve got you covered.</p>
<h3 id="heading-4-free-for-everyone">4. <strong>Free for Everyone</strong></h3>
<p>The tool is <strong>free</strong>—no hidden fees or premium plans. It’s perfect for developers at any level.</p>
<hr />
<h2 id="heading-how-this-tool-boosts-your-workflow">How This Tool Boosts Your Workflow</h2>
<p>Our Dockerfile Generator is designed to integrate seamlessly with your development workflow:</p>
<ul>
<li><p>If you’re in a rush, simply paste your repo URL and get a Dockerfile instantly.</p>
</li>
<li><p>For long-term projects, log in, generate, and commit Dockerfiles directly to your repositories for better version control.</p>
</li>
</ul>
<p>The process is smooth and optimized to save you time while following <strong>Docker best practices</strong>.</p>
<hr />
<h2 id="heading-conclusion">Conclusion</h2>
<p>Generating Dockerfiles is now easier than ever with our <strong>Dockerfile Generation Website</strong>. Whether you choose to <strong>paste a repository URL</strong> or <strong>log in via GitHub OAuth</strong>, you can quickly generate, download, or commit Dockerfiles in just a few steps.</p>
<p>Our tool is designed to follow Docker’s best practices, making your Dockerfiles smaller, faster, and easier to manage. Best of all, it’s <strong>free</strong> for everyone!</p>
<hr />
<h3 id="heading-faqs"><strong>FAQs</strong></h3>
<p><strong>Q1: Is this service really free?</strong> Yes, our Dockerfile generation tool is completely free for all users.</p>
<p><strong>Q2: Can I use this tool without logging in?</strong> Yes, you can generate Dockerfiles without logging in by pasting the URL of a public GitHub repository.</p>
<p><strong>Q3: How does GitHub OAuth work?</strong> GitHub OAuth allows you to securely log in to your GitHub account, access your repositories, and commit Dockerfiles directly from the tool.</p>
<p><strong>Q4: Is there support for private repositories?</strong> Yes, but you need to log in via GitHub OAuth to access and commit Dockerfiles to private repositories.</p>
]]></content:encoded></item><item><title><![CDATA[Getting Started with Cloudflare Tunnels: A Step-by-Step Guide 🌐🚀]]></title><description><![CDATA[Securing and improving the speed of your online apps is more important than ever in the current digital era. A particularly noteworthy solution is Cloudflare Tunnels. You may open ports on your firewall and expose your local web server to the interne...]]></description><link>https://basir.devsomeware.com/getting-started-with-cloudflare-tunnels-a-step-by-step-guide</link><guid isPermaLink="true">https://basir.devsomeware.com/getting-started-with-cloudflare-tunnels-a-step-by-step-guide</guid><category><![CDATA[devops a]]></category><category><![CDATA[cloudflare]]></category><category><![CDATA[Devops]]></category><dc:creator><![CDATA[Basir Khan]]></dc:creator><pubDate>Sun, 13 Oct 2024 12:47:07 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728821512439/efa35950-be22-497b-858a-6b6e416e6188.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Securing and improving the speed of your online apps is more important than ever in the current digital era. A particularly noteworthy solution is Cloudflare Tunnels. With this handy tool, you can expose your local web server to the internet without opening ports on your firewall. This tutorial will assist you in understanding and using Cloudflare Tunnels, regardless of whether you're a developer testing a new project or a company owner trying to increase security.</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">😊</div>
<div data-node-type="callout-text">Host Your Application on Your Laptop! 😂</div>
</div>

<p>Why wait for deployment? With Cloudflare Tunnels, you can host your app right from your laptop!</p>
<ul>
<li><p><strong>No More "Did you push it to Git?"</strong> – Your app is live on your laptop! 🖥️✨</p>
</li>
<li><p><strong>Impress Your Friends</strong> – “Oh, just hosting my app on my laptop!” 😎</p>
</li>
</ul>
<p>Go ahead, showcase your project while sipping your coffee! ☕🎉</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">👎</div>
<div data-node-type="callout-text">Avoid hosting directly from your laptop without a tunnel like Cloudflare Tunnel or ngrok 🚫</div>
</div>

<p><a target="_blank" href="https://verpex.com/blog/hosting-service-explained/can-i-host-a-website-on-my-computer#:~:text=Make%20sure%20hosting%20is%20allowed,every%20day%20of%20the%20week.">Read why here</a></p>
<h1 id="heading-what-are-cloudflare-tunnels">What Are Cloudflare Tunnels? 🕳️</h1>
<p>Cloudflare Tunnels (formerly known as Argo Tunnels) create a secure, outbound-only connection between your server and Cloudflare's global network. This means you can protect your applications without exposing your server's IP address, which significantly enhances your security posture.</p>
<h3 id="heading-key-benefits-of-using-cloudflare-tunnels">Key Benefits of Using Cloudflare Tunnels:</h3>
<ul>
<li><p><strong>Enhanced Security</strong>: Protects your server from direct exposure to the internet.</p>
</li>
<li><p><strong>Easy Setup</strong>: Set up with just a few commands—no complex configurations required.</p>
</li>
<li><p><strong>Global Load Balancing</strong>: Routes traffic efficiently across multiple servers.</p>
</li>
<li><p><strong>Automatic HTTPS</strong>: Ensures secure connections without manual SSL certificate management.</p>
</li>
</ul>
<p><img src="https://global.discourse-cdn.com/cloudflare/original/3X/7/e/7e8f31b6ad03104b2a6bcc18cd3671833ff1884f.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-story-time-meet-alex">Story Time: Meet Alex 👨‍💻</h2>
<p>Let's illustrate the power of Cloudflare Tunnels with a story. Meet Alex, a web developer who recently launched a personal blog. One evening, while working on a new feature, he decided to test it locally. However, he faced a common dilemma: how to showcase his work without compromising security.</p>
<p><strong>Enter Cloudflare Tunnels.</strong> With just a few commands, Alex was able to expose his local development server securely. His friends and colleagues could now access his blog at a unique URL without the risk of exposing his local environment to potential attacks.</p>
<h1 id="heading-how-it-works">How does it work?</h1>
<p>Cloudflare Tunnel provides you with a secure way to connect your resources to Cloudflare without a publicly routable IP address. With Tunnel, you do not send traffic to an external IP — instead, a lightweight daemon in your infrastructure (<code>cloudflared</code>) creates outbound-only connections to Cloudflare’s global network. Cloudflare Tunnel can connect HTTP web servers, <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/use-cases/ssh/">SSH servers</a>, <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/use-cases/rdp/">remote desktops</a>, and other protocols safely to Cloudflare. This way, your origins can serve traffic through Cloudflare without being vulnerable to attacks that bypass Cloudflare.</p>
<p>Cloudflared establishes outbound connections (tunnels) between your resources and Cloudflare’s global network. Tunnels are persistent objects that route traffic to DNS records. Within the same tunnel, you can run as many ‘cloudflared’ processes (connectors) as needed. These processes will establish connections to Cloudflare and send traffic to the nearest Cloudflare data center.</p>
<p>Refer to our <a target="_blank" href="https://developers.cloudflare.com/reference-architecture/architectures/sase/">reference architecture</a> for details on how to implement Cloudflare Tunnel in your existing infrastructure.</p>
<p><img src="https://developers.cloudflare.com/_astro/handshake.eh3a-Ml1_ZvgY0m.webp" alt class="image--center mx-auto" /></p>
<h2 id="heading-installing-cloudflare-tunnel"><strong>Installing Cloudflare Tunnel</strong></h2>
<p>Cloudflare Tunnel requires the installation of a lightweight server-side daemon, <code>cloudflared</code>, to connect your infrastructure to Cloudflare. If you are <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-remote-tunnel/">creating a tunnel through the dashboard</a>, you can simply copy-paste the installation command shown in the dashboard.</p>
<p>To download and install <code>cloudflared</code> manually, use one of the following links.</p>
<h2 id="heading-github-repository"><strong>GitHub repository</strong></h2>
<p><code>cloudflared</code> is an <a target="_blank" href="https://github.com/cloudflare/cloudflared">open source project ↗</a> maintained by Cloudflare.</p>
<ul>
<li><p><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases">All releases ↗</a></p>
</li>
<li><p><a target="_blank" href="https://github.com/cloudflare/cloudflared/blob/master/RELEASE_NOTES">Release notes ↗</a></p>
</li>
</ul>
<h2 id="heading-latest-release"><strong>Latest release</strong></h2>
<h3 id="heading-linux"><strong>Linux</strong></h3>
<p>You can download and install <code>cloudflared</code> via the <a target="_blank" href="https://pkg.cloudflare.com/">Cloudflare Package Repository ↗</a>.</p>
<p>Alternatively, download the latest release directly:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Type</strong></td><td><strong>amd64 / x86-64</strong></td><td><strong>x86 (32-bit)</strong></td><td><strong>ARM</strong></td><td><strong>ARM64</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Binary</td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-386">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64">Download ↗</a></td></tr>
<tr>
<td>.deb</td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-386.deb">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm.deb">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb">Download ↗</a></td></tr>
<tr>
<td>.rpm</td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-x86_64.rpm">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-386.rpm">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm.rpm">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-aarch64.rpm">Download ↗</a></td></tr>
</tbody>
</table>
</div><h3 id="heading-macos"><strong>macOS</strong></h3>
<p>Download and install <code>cloudflared</code> via Homebrew:</p>
<p><strong>Terminal window</strong></p>
<pre><code class="lang-bash">brew install cloudflared
</code></pre>
<p>Alternatively, download the <a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-darwin-arm64.tgz">latest Darwin arm64 release ↗</a> or <a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-darwin-amd64.tgz">latest Darwin amd64 release ↗</a> directly.</p>
<h3 id="heading-windows"><strong>Windows</strong></h3>
<p>Download and install <code>cloudflared</code> via <a target="_blank" href="https://learn.microsoft.com/en-us/windows/package-manager/winget/">winget ↗</a>:</p>
<p><strong>Terminal window</strong></p>
<pre><code class="lang-bash">winget install --id Cloudflare.cloudflared
</code></pre>
<p>Alternatively, download the latest release directly:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td><strong>Type</strong></td><td><strong>32-bit</strong></td><td><strong>64-bit</strong></td></tr>
</thead>
<tbody>
<tr>
<td>Executable</td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-windows-386.exe">Download ↗</a></td><td><a target="_blank" href="https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-windows-amd64.exe">Download ↗</a></td></tr>
</tbody>
</table>
</div><aside><p>Note</p><section><p>Instances of<span> </span><code>cloudflared</code><span> </span>do not automatically update on Windows. You will need to perform manual updates.</p></section></aside>

<h3 id="heading-docker"><strong>Docker</strong></h3>
<p>A Docker image of <code>cloudflared</code> is <a target="_blank" href="https://hub.docker.com/r/cloudflare/cloudflared">available on DockerHub ↗</a>.</p>
<h2 id="heading-deprecated-releases"><strong>Deprecated releases</strong></h2>
<p>Cloudflare supports versions of <code>cloudflared</code> that are within one year of the most recent release. Breaking changes unrelated to feature availability may be introduced that will impact versions released more than one year ago. For example, as of January 2023, Cloudflare supports <code>cloudflared</code> versions 2022.1.1 through 2023.1.1.</p>
<p>To update <code>cloudflared</code>, refer to <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/downloads/update-cloudflared/">these instructions</a>.</p>
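<p>As a quick sketch (assuming <code>cloudflared</code> is already on your PATH), you can check the installed version and trigger the built-in self-update; note that installs done through a package manager should be updated via apt, brew, or winget instead:</p>

```shell
# Check for cloudflared and update it in place if present.
# "cloudflared update" is the built-in self-updater; it refuses to run
# for package-manager installs, hence the fallback message.
if command -v cloudflared >/dev/null 2>&1; then
  cloudflared --version
  cloudflared update || echo "self-update not supported for this install method"
else
  echo "cloudflared not found on PATH; install it first"
fi
```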
<div data-node-type="callout">
<div data-node-type="callout-emoji">ℹ</div>
<div data-node-type="callout-text">Depending on your OS, install the cloudflared daemon as described above.</div>
</div>

<h1 id="heading-check-it-cloudflare-is-correctly-installed-or-not">Check that cloudflared is correctly installed</h1>
<p>Cloudflare Tunnel can be installed on Windows, Linux, and macOS, as discussed above.</p>
<p>Confirm that <code>cloudflared</code> is installed correctly by running <code>cloudflared --version</code> in your command line:</p>
<p><strong>Terminal window</strong></p>
<pre><code class="lang-bash">cloudflared --version
</code></pre>
<h3 id="heading-result-shoud-be">Result should be 👇</h3>
<pre><code class="lang-bash">cloudflared version 2021.5.9 (built 2021-05-21-1541 UTC)
</code></pre>
<h2 id="heading-run-a-local-service"><strong>Run a local service</strong></h2>
<p>The easiest way to get up and running with Cloudflare Tunnel is to have an application running locally, such as a <a target="_blank" href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-react-site/">React</a> or <a target="_blank" href="https://developers.cloudflare.com/pages/framework-guides/deploy-a-svelte-site/">Svelte</a> site. When you are developing an application with these frameworks, they will often make use of a <code>npm run develop</code> script, or something similar, which mounts the application and runs it on a <a target="_blank" href="http://localhost"><code>localhost</code></a> port. For example, the popular <code>create-react-app</code> tool runs your in-development React application on port <code>3000</code>, making it accessible at the <a target="_blank" href="http://localhost:3000"><code>http://localhost:3000</code></a> address.</p>
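<p>If you do not have a dev server handy, a throwaway static server works just as well as something for the tunnel to forward to. This sketch assumes <code>python3</code> and <code>curl</code> are available and uses port <code>3000</code>, the same port a create-react-app dev server would use:</p>

```shell
# Serve a one-page site on localhost:3000 as a stand-in for a dev server.
mkdir -p demo-site
echo '<h1>Hello from my laptop!</h1>' > demo-site/index.html
(cd demo-site && python3 -m http.server 3000 >/dev/null 2>&1 &)
sleep 2
# Confirm the page is reachable locally before pointing a tunnel at it
curl -s http://localhost:3000/ | grep Hello
```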
<h2 id="heading-start-a-cloudflare-tunnel"><strong>Start a Cloudflare Tunnel</strong></h2>
<p>With a local development server running, a new Cloudflare Tunnel can be instantiated by running <code>cloudflared tunnel</code> in a new command line window, passing in the <code>--url</code> flag with your <a target="_blank" href="http://localhost"><code>localhost</code></a> URL and port. <code>cloudflared</code> will output logs to your command line, including a banner with a tunnel URL:</p>
<p><strong>Terminal window</strong></p>
<pre><code class="lang-bash">cloudflared tunnel --url http://localhost:3000
</code></pre>
<pre><code class="lang-bash">2021-07-15T20:11:29Z INF Cannot determine default configuration path. No file [config.yml config.yaml] in [~/.cloudflared ~/.cloudflare-warp ~/cloudflare-warp /etc/cloudflared /usr/local/etc/cloudflared]
2021-07-15T20:11:29Z INF Version 2021.5.9
2021-07-15T20:11:29Z INF GOOS: linux, GOVersion: devel +11087322f8 Fri Nov 13 03:04:52 2020 +0100, GoArch: amd64
2021-07-15T20:11:29Z INF Settings: map[url:http://localhost:3000]
2021-07-15T20:11:29Z INF cloudflared will not automatically update when run from the shell. To enable auto-updates, run cloudflared as a service: https://developers.cloudflare.com/argo-tunnel/reference/service/
2021-07-15T20:11:29Z INF Initial protocol h2mux
2021-07-15T20:11:29Z INF Starting metrics server on 127.0.0.1:42527/metrics
2021-07-15T20:11:29Z WRN Your version 2021.5.9 is outdated. We recommend upgrading it to 2021.7.0
2021-07-15T20:11:29Z INF Connection established connIndex=0 location=ATL
2021-07-15T20:11:32Z INF Each HA connection's tunnel IDs: map[0:cx0nsiqs81fhrfb82pcq075kgs6cybr86v9vdv8vbcgu91y2nthg]
2021-07-15T20:11:32Z INF +-------------------------------------------------------------+
2021-07-15T20:11:32Z INF |  Your free tunnel has started! Visit it:                    |
2021-07-15T20:11:32Z INF |    https://seasonal-deck-organisms-sf.trycloudflare.com     |
2021-07-15T20:11:32Z INF +-------------------------------------------------------------+
</code></pre>
<p>In this example, the randomly-generated URL <a target="_blank" href="https://seasonal-deck-organisms-sf.trycloudflare.com"><code>https://seasonal-deck-organisms-sf.trycloudflare.com</code></a> has been created and assigned to your tunnel instance. Visiting this URL in a browser will show the application running, with requests being securely forwarded through Cloudflare’s global network, through the tunnel running on your machine, to <a target="_blank" href="http://localhost:3000"><code>localhost:3000</code></a>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728822738488/6b8df491-3a15-4b02-b133-dcef26d18a05.png" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728823262355/d20b5d5c-88c3-49e2-8f01-0ec623ac07ca.png" alt class="image--center mx-auto" /></p>
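<p>If you want to script around quick tunnels, one way to capture the generated URL is to run the tunnel in the background and grep it out of the logs. This is a sketch, not an official interface: the sleep duration and the URL pattern are assumptions, and the block only starts the tunnel if <code>cloudflared</code> is installed.</p>

```shell
# Run a quick tunnel in the background and extract the trycloudflare.com
# URL from its logs (pattern and timing are best-effort assumptions).
log=$(mktemp)
if command -v cloudflared >/dev/null 2>&1; then
  cloudflared tunnel --url http://localhost:3000 >"$log" 2>&1 &
  sleep 5
fi
grep -o 'https://[a-z0-9-]*\.trycloudflare\.com' "$log" || echo "no tunnel URL yet"
```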
<h1 id="heading-create-a-remotely-managed-tunnel-dashboard"><strong>Create a remotely-managed tunnel (dashboard)</strong></h1>
<p>Follow this step-by-step guide to get your first tunnel up and running using Zero Trust.</p>
<h2 id="heading-prerequisites"><strong>Prerequisites</strong></h2>
<p>Before you start, make sure you:</p>
<ul>
<li><p><a target="_blank" href="https://developers.cloudflare.com/fundamentals/setup/manage-domains/add-site/">Add a website to Cloudflare</a>.</p>
</li>
<li><p><a target="_blank" href="https://developers.cloudflare.com/dns/zone-setups/full-setup/setup/">Change your domain nameservers to Cloudflare</a>.</p>
</li>
</ul>
<h2 id="heading-1-create-a-tunnel"><strong>1. Create a tunnel</strong></h2>
<ol>
<li><p>Log in to <a target="_blank" href="https://one.dash.cloudflare.com/">Zero Trust ↗</a> and go to <strong>Networks</strong> &gt; <strong>Tunnels</strong>.</p>
</li>
<li><p>Select <strong>Create a tunnel</strong>.</p>
</li>
<li><p>Choose <strong>Cloudflared</strong> for the connector type and select <strong>Next</strong>.</p>
</li>
<li><p>Enter a name for your tunnel. We suggest choosing a name that reflects the type of resources you want to connect through this tunnel (for example, <code>enterprise-VPC-01</code>).</p>
</li>
<li><p>Select <strong>Save tunnel</strong>.</p>
</li>
<li><p>Next, you will need to install <code>cloudflared</code> and run it. To do so, check that the environment under <strong>Choose an environment</strong> reflects the operating system on your machine, then copy the command in the box below and paste it into a terminal window. Run the command.</p>
</li>
<li><p>Once the command has finished running, your connector will appear in Zero Trust.</p>
<p> <img src="https://developers.cloudflare.com/_astro/connector.DgDJjokf_IrBTB.webp" alt="Connector appearing in the UI after cloudflared has run" /></p>
</li>
<li><p>Select <strong>Next</strong>.</p>
</li>
</ol>
<p>The next steps depend on whether you want to <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-remote-tunnel/#2-connect-an-application">connect an application</a> or <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-remote-tunnel/#3-connect-a-network">connect a network</a>.</p>
<h2 id="heading-2-connect-an-application"><strong>2. Connect an application</strong></h2>
<p>Follow these steps to connect an application through your tunnel. If you are looking to connect a network, skip to the <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-remote-tunnel/#3-connect-a-network">Connect a network section</a>.</p>
<ol>
<li><p>In the <strong>Public Hostnames</strong> tab, choose a <strong>Domain</strong> and specify any subdomain or path information.</p>
</li>
<li><p>Specify a service, for example <code>https://localhost:8000</code>.</p>
</li>
<li><p>Under <strong>Additional application settings</strong>, specify any <a target="_blank" href="https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/configure-tunnels/origin-configuration/">parameters</a> you would like to add to your tunnel configuration.</p>
</li>
<li><p>Select <strong>Save tunnel</strong>.</p>
</li>
</ol>
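<p>For context, a locally-managed tunnel (configured on your machine rather than in the Zero Trust dashboard) expresses the same hostname-to-service mapping in a <code>config.yml</code>. Everything below is a placeholder: the tunnel ID, the credentials path, and the hostname.</p>

```yaml
# ~/.cloudflared/config.yml (placeholders throughout)
tunnel: 6ff42ae2-765d-4adf-8112-31c55c1551ef
credentials-file: /home/user/.cloudflared/6ff42ae2-765d-4adf-8112-31c55c1551ef.json

ingress:
  # Route the public hostname to the local service
  - hostname: app.example.com
    service: https://localhost:8000
  # Ingress rules must end with a catch-all rule
  - service: http_status:404
```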
<h2 id="heading-3-connect-a-network"><strong>3. Connect a network</strong></h2>
<p>Follow these steps to connect a private network through your tunnel.</p>
<ol>
<li><p>In the <strong>Private Networks</strong> tab, add an IP or CIDR.</p>
</li>
<li><p>Select <strong>Save tunnel</strong>.</p>
</li>
</ol>
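<p>If you manage tunnels from the CLI instead of the dashboard, the same private-network route can be added with <code>cloudflared tunnel route ip add</code>. In this sketch, the tunnel name <code>my-tunnel</code> and the CIDR are placeholders:</p>

```shell
# Route a private CIDR through an existing tunnel from the CLI.
# "my-tunnel" and 10.0.0.0/8 are placeholders for your own values.
if command -v cloudflared >/dev/null 2>&1; then
  cloudflared tunnel route ip add 10.0.0.0/8 my-tunnel
else
  echo "cloudflared not found; run this where the connector is installed"
fi
```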
<h2 id="heading-4-view-your-tunnel"><strong>4. View your tunnel</strong></h2>
<p>After saving the tunnel, you will be redirected to the <strong>Tunnels</strong> page. Look for your new tunnel to be listed along with its active connector.</p>
<p><img src="https://developers.cloudflare.com/_astro/tunnel-table.D9VVGgDD_qjaM8.webp" alt="Tunnel appearing in the Tunnels table" /></p>
]]></content:encoded></item></channel></rss>