Welcome to the exciting world of React Redux, a game-changing JavaScript library designed to manage application state efficiently. Familiarity and proficiency with React Redux have become essential for many contemporary web developers, given its integral role in creating robust, performant applications. This article unravels the mechanisms and principles of React Redux, exploring its origins and its crucial role in enhancing JavaScript applications. The discussion extends from introducing the fundamentals to dissecting the intricacies of the Redux Store, Actions, Reducers, and Middlewares. Embark on this informative expedition to comprehend how React Redux serves as an invaluable toolset for building dynamic, user-interactive interfaces. Fundamentals of React Redux Understanding the Power of React Redux in Today’s Tech Landscape The pace of technology evolution is breathtaking, with new frameworks and libraries launching every day and transforming the developer landscape. One such technology, a combination of two open-source JavaScript libraries known as React Redux, has unequivocally become the bellwether in state management solutions for modern web applications. React was initially released by Facebook in 2013 and provides a base framework for developers to build complex and interactive user interfaces. Although powerful in terms of interface development, it doesn’t include any built-in architecture to handle application state. Enter Redux, offering the missing piece of the puzzle and significantly enhancing React’s capabilities by managing application state at scale and integrating with it seamlessly. Redux was inspired by Facebook’s Flux and the functional programming language Elm, and was created to manage state in a more predictable manner. State refers to the persistent data that dictates the behavior of an app at any given point. Redux stores the entire app’s state in a single immutable tree, which makes it much easier to manage, track, and manipulate in large applications. Redux ensures simplicity, predictability, and consistency in working with data. The libraries adopt unidirectional data flow, meaning the data maintains a one-way stream, reducing the complexity of tracking changes in large-scale apps and making debugging a less daunting task. However, it’s crucial to note that Redux isn’t for every project. Its value comes to the fore when dealing with considerable state management; in smaller applications it can add unneeded complexity. React Redux combines the robust interface development of React and the state management prowess of Redux, simplifying the process of building complex apps. Their union allows the use of functional programming inside a JavaScript app, where React handles the view and Redux manages the data. Get the best out of React Redux through its ecosystem and libraries such as Redux Toolkit and Redux Saga: Redux Toolkit simplifies Redux usage with utilities that reduce boilerplate code, and Redux Saga manages side effects in a cleaner, more readable manner. The secret to why React Redux thrives in the tech world lies in its maintainability, scalability, and developer experience. Centralized and predictable state management opens the door to powerful developer tools, async logic handling, breaking down the UI into easily testable parts, and caching of data. These features have attracted a vast community of developers and organizations, nurturing its growth and development. 
React Redux sharpens the edge of tech development through quick prototyping, enhanced performance, and easing the load of dealing with complex state manipulations. In a dynamic tech environment, it shines as a reliable, scalable, and efficient choice for developers worldwide. As technological advancements show no sign of slowing, understanding tools like React Redux becomes critical, and harnessing its potential will maintain a productive and efficient development flow. To any tech enthusiast devoted to solutions that automate and maximize productivity, this should sound like music to the ears! Unquestionably, React Redux plays an essential role in understanding how today’s technology ecosystems interact and function. Understanding Redux Store Branching out from the comprehensive understanding of React and Redux, let’s delve into the specifics of the Redux Store and its pivotal role in web application development. It’s not an overstatement to say that the Redux Store is the beating heart of every Redux application. It houses the entire state of the application, and understanding how to manage it is paramount to mastering Redux. The Redux Store is effectively the state container; it’s where the state of your application lives, and all changes flow through it. No doubt, this centralized store holds immense importance, but there’s something more compelling about it – the Redux Store is read-only. Yes, you read that right! The state cannot be directly mutated. This strict read-only pattern ensures predictability by imposing a straightforward data flow and making state updates traceable and easy to comprehend. One might wonder: if not by direct mutation, how does a state update happen in a Redux Store? This is where the power of actions and reducers steps in. The only way to trigger a state change is to dispatch an action – an object describing what happened. To specify how the state tree transforms in light of these actions, reducers are designated. Reducers are pure functions that compute the new state based on the previous state and the action dispatched. The Redux Store exposes three fundamental functions: dispatch(), getState(), and subscribe(). The dispatch() method sends actions to the store. getState() retrieves the current state of the Redux Store. subscribe() registers a callback function that the Redux Store will call any time an action has been dispatched, so that UI components can be updated.
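To make these three functions concrete, here is a minimal sketch using Redux's classic createStore API (Redux Toolkit's configureStore is the modern recommendation, but the plain API shows the mechanics most clearly; the counter example itself is purely illustrative):

JavaScript
const { createStore } = require("redux");

// A reducer: a pure function that computes the next state from the previous state and an action
function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case "INCREMENT":
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(counter);

// subscribe() registers a callback that the store invokes after every dispatched action
const unsubscribe = store.subscribe(() => {
  console.log("State changed:", store.getState());
});

store.dispatch({ type: "INCREMENT" }); // logs: State changed: { count: 1 }
console.log(store.getState()); // { count: 1 }
unsubscribe(); // stop listening for further updates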
What makes the Redux Store a real game-changer is its contribution to predictability and ease of debugging. The immutability premise ensures that every state change leaves a trace, enabling tools like Redux DevTools to create logs of user actions. Think of it like a CCTV system for your state changes; you can literally see where, when, and how your state changed. This is a huge selling point for developers working as a team on large-scale applications. Moreover, it’s hard not to mention how the Redux Store impacts scalability. In a large-scale application with multiple components, direct state management can turn into a nightmare. The Redux Store acts as the single source of truth, which simplifies communication between components and brings in structure and organization. This makes your application highly scalable, maintainable, and, equally important, more resilient to bugs. In conclusion, the Redux Store absolutely embodies the essence of Redux. It brings predictability, maintainability, and ease of debugging to your applications. A solid understanding of the Redux Store adequately prepares you for the complexities involved in high-scale application development. Remember, mastery of modern technologies like Redux brings you one step closer to the goal of a flawless user experience. And isn’t that what we all aim for? Action and Reducers in Redux Diving into the heart of Redux, we’ll now explore the key players that bring Redux to life – Actions and Reducers. If you’re keen on optimizing your user interface and improving data flow in your projects, understanding these two pillars of Redux can unlock possibilities for more efficient and interactive web applications. In Redux, Actions are payloads of information that send data from your application to the Redux Store. They play an integral role in triggering changes to the application’s state. A defining feature of Actions is that they are the only source of information for the Store. Because they must be plain objects, they promote consistency, easier testing, and improved debugging. Every action carries a ‘type’ property, which defines the nature or intent of the action. The type property drives the workflow and helps the Redux Store determine what transformations or updates are needed. More complex Actions might also include a ‘payload’ field, carrying additional information for the state change. Transitioning now to Reducers: they are the fundamental building blocks that define how state transitions happen in a Redux application. They take in the current state and an action and return the new state, thus forming the core of Redux. It’s crucial to note that Reducers are pure functions, implying the output depends solely on the state and action inputs, and no side effects like network or database calls are executed. In practice, developers often split a single monolithic Reducer into smaller Reducer functions, each handling a separate slice of the data. This boosts maintainability by keeping functions small and aids organization by grouping similar tasks together. The operational flow between Actions and Reducers is thus: an Action describes a change, and a Reducer takes in that action and evolves the state accordingly. The dispatch function ties this handshake together by effectively bridging Actions and Reducers: a dispatched action is sent to all the Reducers in the Store, and based on the action’s type, the appropriate state change occurs. To conclude, Actions and Reducers are the conduits that power state change in Redux. These two work conjointly, transforming applications into predictable, testable, and easily debuggable systems. They ensure that React Redux remains an indispensable tool for efficient web application development in the modern tech space. Mastering these components unlocks the potential of Redux, making it easier to scale, maintain, and enhance your applications. React Redux Middlewares Transitioning next to the concept of middlewares in Redux: a middleware serves as a middleman between the dispatching of an action and the moment it reaches the reducer. Middlewares open a new horizon of possibilities when we need to deal with asynchronous actions, and they provide a convenient spot for logic that doesn’t necessarily belong inside a component or even a reducer. 
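As a concrete illustration of that idea, here is a minimal logging middleware; this is a hedged sketch rather than production code, using the standard (store) => (next) => (action) signature, with a trivial placeholder reducer:

JavaScript
const { createStore, applyMiddleware } = require("redux");

// Placeholder reducer, for illustration only
const rootReducer = (state = {}, action) => state;

// A logging middleware: it sees every action before the reducer does
const logger = (store) => (next) => (action) => {
  console.log("dispatching", action.type);
  const result = next(action); // hand the action to the next middleware, or the reducer
  console.log("next state", store.getState());
  return result;
};

// Bind the middleware to the store with Redux's built-in applyMiddleware
const store = createStore(rootReducer, applyMiddleware(logger));
store.dispatch({ type: "PING" }); // logs "dispatching PING" and the resulting state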
Middleware provides a third-party extension point between dispatching an action and the moment it reaches the reducer, setting the stage for monitoring, logging, and intercepting dispatched actions before they hit a reducer. Redux has a built-in applyMiddleware function that we can use when creating our store to bind middleware to it, as shown above. One of the most common use cases for middleware is supporting asynchronous interactions. Whereas actions need to be plain objects, and reducers only care about the previous and next state, a middleware can interpret actions of a different format, such as functions or promises, enabling capabilities like async workflows, time travel, crash reporting, and more. Applied primarily for handling asynchronous actions or side effects (such as API calls), Redux middleware libraries like Redux Thunk and Redux Saga lead the way here. Redux Thunk, for instance, allows you to write action creators that return a function rather than an action, extending the functionality of the Redux dispatch function. When such a function gets dispatched, the Redux Thunk middleware tells Redux to hold up until the called API methods resolve before anything reaches the reducer. Redux Saga, on the other hand, leverages ES6 generator functions to make asynchronous flows more manageable and efficient. It accomplishes this by pausing the generator function while executing the async operation, then resuming the generator with the received data once the operation completes. There is no denying that middleware is the driving force in making APIs work seamlessly with Redux. Middlewares can be thought of as an assembly line that prepares an action to be processed by a reducer. They take care of the nitty-gritty details, like the order in which multiple middlewares are applied or how to deal with async operations, ensuring that the Reducers stay pure by being concerned only with calculating the next state. In conclusion, React Redux and its arsenal, including middleware, make web development a smooth ride. The introduction of middleware as a third-party extension bridging the gap between dispatching an action and the moment it hits the reducer has opened a new vista of opportunities for dealing with complex scenarios in a clean and effective manner. Actions, reducers, and middlewares – together they form a harmonious trinity that powers high-scale, seamless web development. Building Applications With React Redux Continuing our journey through best practices in React Redux applications, let’s now delve into the world of ‘selectors.’ What does a selector do? Simply put, selectors are pure functions that extract and compute derived data from the Redux store state. In the Redux ecosystem, selectors are leveraged to encapsulate the state structure, adding a protective shield that keeps other parts of the app from knowing its intricate details. Selectors come in handy in numerous ways. Notably, they shine in enhancing the maintainability of React Redux applications, especially as applications evolve and expand over time. As the scope of an application grows, it becomes necessary to reorganize the state shape – which selectors make less daunting. With selectors, achieving this change won’t require editing other parts of the codebase – a win for maintainability. Consider selectors the ‘knowledge-bearers’ of the state layout. This lends them the power to retrieve anything from the Redux state and perform the computations and preparations needed to satisfy components’ requirements. 
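As a sketch of what that looks like in practice (the state shape here is invented for illustration, and the memoized variant uses the reselect library, a common companion to React Redux rather than part of it):

JavaScript
const { createSelector } = require("reselect");

// A plain selector: the only place that knows where "items" live in the state tree
const selectItems = (state) => state.products.items;

// A memoized selector: recomputes only when its inputs change,
// which helps prevent needless re-renders in connected components
const selectExpensiveItems = createSelector([selectItems], (items) =>
  items.filter((item) => item.price > 100)
);

// Usage against some example state
const state = {
  products: { items: [{ name: "Laptop", price: 1200 }, { name: "Pen", price: 2 }] },
};
console.log(selectExpensiveItems(state)); // [ { name: 'Laptop', price: 1200 } ]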
Therefore, selectors are a key component in managing state in Redux applications and preventing needless renders, ultimately optimizing performance. Next on our voyage, consider the ‘Immutable Update Patterns.’ These are best practices for updating state in Redux applications. As Redux relies on immutability to function correctly, following these patterns is vital. By avoiding direct data mutation, the patterns help keep state consistent while keeping the code organized and readable. One important group of patterns involves updating arrays: using array spread syntax (...), map, filter, and other non-mutating array methods makes it possible to adhere to immutability when updating arrays. Another relates to updating objects, where object spread syntax is commonly employed. Distinct patterns target adding, updating, and removing items in arrays. Familiarizing yourself with these patterns can streamline React Redux development, leading to cleaner and better-structured code. Lastly, let’s touch upon ‘Connecting React and Redux.’ The React Redux library facilitates this connection via two primary mechanisms – ‘Provider’ and ‘connect.’ With ‘Provider,’ the Redux store becomes accessible to the rest of the app; it employs the Context API under the hood to make this happen. Meanwhile, ‘connect’ makes individual components ‘aware’ of the Redux store. It fetches the necessary state values from the store, dispatches actions to the store, and injects these as props into the components. The ‘connect’ function therefore fosters the interaction between React components and the Redux Store, helping to automate state management effectively. React and Redux prove to be a formidable combination in creating dynamic web applications. From state management to the convenience of selectors, immutable update patterns, middleware, the Redux Store, actions, reducers, and the ability to seamlessly connect React with Redux – React Redux brings a compelling capacity to streamline web application development. It underlines the central role technology plays in problem-solving, especially where efficiency, scalability, and maintainability are crucial. By mastering these concepts, web developers can find their React Redux journey smoother than ever before. Having delved deep into the world of React Redux, we now understand the impact it has on streamlining complex code and boosting application efficiency. From the innovative concept of a Redux Store holding the application state to the dance of actions and reducers that update that state, React Redux revolutionizes state management. We’ve also gleaned the power of middleware functions, which are crucial for dealing with asynchronous actions and logging. Finally, all these theoretical insights have culminated in the real-world implementation of building applications with this versatile JavaScript library. It’s clear that when it comes to state management in web application development, React Redux stands as a robust, go-to solution. Here’s to our continued exploration of technology as we chart new pathways, further deepening our understanding and skill in application development.
Learn to build an efficient image storage system with Node.js and MongoDB: manage, upload, retrieve, and display images for various applications. Introduction Images have become crucial in numerous fields and sectors in the digital era. Reliable image storage and access are vital for a smooth user experience in content management systems, social networking, online commerce, and a variety of related applications. MongoDB, a NoSQL database, and Node.js, the well-known JavaScript runtime, work well together for building an efficient image storage system. In this article, you will examine the design and implementation of a Node.js API for image storage using MongoDB as the backend. Beyond saving and retrieving images, efficient image storage involves adding intelligence to the system for operations like image categorization, search, and delivery optimization. MongoDB's flexible schema makes it an outstanding database for this purpose, and Node.js, widely recognized for its speed and scalability, is a great fit for developing the API. Combined, they offer an effective way to organize and store images. Setting up the Environment Before diving into the code, let's set up the workspace. Make sure both MongoDB and Node.js are installed on your machine. An Integrated Development Environment (IDE) or text editor is likewise recommended for developing and testing the code. To create a new Node.js project, navigate to the project directory and run `npm init`. Follow the prompts to create a `package.json` file with the project's metadata. Next, install the necessary dependencies (Express, Mongoose, and Multer, described below). Let's dive into the details of each of these components: Express Express is a popular Node.js web application framework. It is minimal, flexible, and simple to use, offering a wide range of capabilities for both web and mobile applications. Express provides features and tools for managing HTTP requests and responses, routing, middleware, and other responsibilities, making the development of web applications quicker. Some of Express's salient features are: Routing Express lets you set up routes for your application so you can tell it how to react to various HTTP requests (GET, POST, PUT, DELETE, for example). Middleware A variety of responsibilities, including request processing, error handling, logging, and authentication, can be carried out by middleware components. You can use Express's broad middleware ecosystem in your own application. Template Engines For producing HTML content on the server, you can use template engines such as EJS or Pug with Express. JSON Parsing Express makes working with REST-based APIs easier by seamlessly parsing incoming JSON data. Error Handling Express offers custom error handlers to deal with failures gracefully. Because of its simplicity and versatility, Express is widely used in the Node.js ecosystem to develop web sites and applications. Mongoose For MongoDB, a NoSQL database, Mongoose is an Object Data Modeling (ODM) library. MongoDB stores information in a dynamic JSON-like format called BSON (Binary JSON). 
With Mongoose, working with MongoDB becomes more structured, because data models are defined with schemas much like they are in traditional relational database systems. Some of its key features: Schema Definition Mongoose's schema system lets you define your data models and how they are organized, making it possible to enforce specific rules on the data you store, such as field types and validation constraints. CRUD Operations Mongoose provides simple methods to create, read, update, and delete documents in MongoDB, speeding up database interactions. Middleware Mongoose, like Express, has hook methods that can be employed to run logic either before or after specific database operations. Data Validation Mongoose lets you define rules for your data to make sure it conforms to the structure you have set. When MongoDB is the database of choice for a Node.js application, Mongoose provides the structure and organization that simplify the conversation between developers and MongoDB. Multer Multer is a middleware for managing file uploads in Node.js applications. It is typically used together with Express for processing and storing files submitted over HTTP forms, notably file uploads in web applications. Multer provides options for managing and storing files, and it streamlines the file upload procedure. Multer's key qualities are: File Upload Handling Multer can handle client requests containing file uploads and give the server access to the uploaded files. Configuration Multer can be configured to store uploaded files in a certain location, accept only certain file types, and perform file renaming operations. Middleware Integration File upload features may be easily added to your web apps through Multer's seamless integration with Express. Multer is useful wherever you must handle user-uploaded files, such as image uploads, document attachments, and more. Project Structure Let's start by creating the project structure. Here is a high-level overview of the structure:

image-storage-node.js/
│
├── node_modules/       # Dependencies
├── uploads/            # Image uploads directory
├── app.js              # Main application file
├── package.json        # Project dependencies and scripts
├── package-lock.json   # Dependency versions
├── routes/             # API routes
│   └── images.js       # Image-related routes
└── models/             # MongoDB schemas
    └── image.js        # Image schema

Designing the Image Storage System The various parts of our image storage system are listed below: Express.js server: the Node.js server acts as the API endpoint, handling HTTP requests and image processing. MongoDB database: used for storing metadata about the images, such as file paths, user data, and keywords. Multer middleware: manages and stores image uploads to the server. Implementing the Node.js API Let's start by implementing the Node.js API. Create a JavaScript file, e.g., `app.js`, and set up the basic structure of your Express.js server. 
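The listing below is a minimal sketch of such a server. For brevity, the image model and routes that the project structure places in models/image.js and routes/images.js are inlined here, and storing the image bytes directly in MongoDB (rather than in the uploads/ directory) is an illustrative choice, not the only option:

JavaScript
// app.js -- a minimal sketch of the server described in this article
const express = require('express');
const cors = require('cors');
const mongoose = require('mongoose');
const multer = require('multer');

const app = express();
app.use(cors());         // allow access to the API from other domains
app.use(express.json()); // parse JSON request bodies

// Connect to the local MongoDB database "image-storage"
mongoose.connect('mongodb://127.0.0.1:27017/image-storage');

// Image schema: the binary data plus searchable metadata
const Image = mongoose.model('Image', new mongoose.Schema({
  name: { type: String, required: true },
  contentType: String,
  data: Buffer,   // the image bytes themselves
  tags: [String], // keywords for categorization and search
  uploadedAt: { type: Date, default: Date.now },
}));

// Multer: keep uploads in memory and reject files over 5 MB
const upload = multer({
  storage: multer.memoryStorage(),
  limits: { fileSize: 5 * 1024 * 1024 },
});

// POST /upload -- store an uploaded image and its metadata
app.post('/upload', upload.single('image'), async (req, res) => {
  const image = await Image.create({
    name: req.file.originalname,
    contentType: req.file.mimetype,
    data: req.file.buffer,
    tags: (req.body.tags || '').split(',').filter(Boolean),
  });
  res.status(201).json({ id: image._id, name: image.name });
});

// GET /images -- list image metadata (excluding the binary payload)
app.get('/images', async (req, res) => {
  const images = await Image.find({}, '-data');
  res.json(images);
});

// GET /images/:id -- fetch and serve the image file itself
app.get('/images/:id', async (req, res) => {
  const image = await Image.findById(req.params.id);
  if (!image) return res.status(404).json({ error: 'Image not found' });
  res.contentType(image.contentType).send(image.data);
});

app.listen(3000, () => console.log('Server listening on port 3000'));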
This code creates your Express.js server, connects to the local MongoDB database named "image-storage," and uses Mongoose to define the image schema. The primary pieces are: Express: the Express.js framework, imported to establish the server. App: an instance of the Express application. CORS: Cross-Origin Resource Sharing middleware that allows the API to be accessed from other domains. Body parsing: JSON parsing middleware for requests (express.json() in the sketch above). Image routes: the routes handling the image-related API endpoints (kept in routes/images.js in the full project structure, inlined in the sketch above). app.use: registers the Express middleware functions that parse JSON data in requests and enable CORS. app.listen: starts the server on port 3000. Handling Image Uploads We use the Multer middleware for handling image uploads, setting up a POST route and the storage configuration for photos. As shown in the sketch above, Multer is configured to store the uploaded files in memory with a maximum file size of 5MB. Upon receiving a POST request from the client to `/upload`, the system saves the image's details to MongoDB. Retrieving and Displaying Images Next come the endpoints for retrieving and displaying images. The /images endpoint retrieves a list of image metadata, and the /images/:id endpoint fetches and serves the image file. Conclusion In this post, we've covered the basic structure and setup of an efficient image storage system powered by Node.js and MongoDB: installing the environment, designing the system layout, and implementing image upload, retrieval, and display. Still, this is only the beginning of what an intelligent image storage system can do. You could enhance it further with features like search, image resizing, and smart image analysis. The combination of Node.js and MongoDB delivers an impressive foundation on which to build intelligent, flexible image storage systems that meet the diverse requirements of today's use cases.
Welcome back to the series where we are learning how to integrate AI products into web applications: Intro & Setup Your First AI Prompt Streaming Responses How Does AI Work Prompt Engineering AI-Generated Images Security & Reliability Deploying Last time, we got all the boilerplate work out of the way. In this post, we’ll learn how to integrate OpenAI’s API responses into our Qwik app using fetch. We’ll want to make sure we’re not leaking API keys by executing these HTTP requests from a backend. By the end of this post, we will have a rudimentary, but working AI application. Generate OpenAI API Key Before we start building anything, you’ll need to go to platform.openai.com/account/api-keys and generate an API key to use in your application. Make sure to keep a copy of it somewhere because you will only be able to see it once. With your API key, you’ll be able to make authenticated HTTP requests to OpenAI. So it’s a good idea to get familiar with the API itself. I’d encourage you to take a brief look through the OpenAI Documentation and become familiar with some concepts. The models are particularly good to understand because they have varying capabilities. If you would like to familiarize yourself with the API endpoints, expected payloads, and return values, check out the OpenAI API Reference. It also contains helpful examples. You may notice the JavaScript package available on NPM called openai. We will not be using this, as it doesn’t quite support some things we’ll want to do that fetch can. Make Your First HTTP Request The application we’re going to build will make an AI-generated text completion based on the user input. For that, we’ll want to work with the chat endpoint (note that the completions endpoint is deprecated). We need to make a POST request to https://api.openai.com/v1/chat/completions with the 'Content-Type' header set to 'application/json', the 'Authorization' set to 'Bearer OPENAI_API_KEY' (you’ll need to replace OPENAI_API_KEY with your API key), and the body set to a JSON string containing the GPT model to use (we’ll use gpt-3.5-turbo) and an array of messages: JavaScript fetch('https://api.openai.com/v1/chat/completions', { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer OPENAI_API_KEY' }, body: JSON.stringify({ 'model': 'gpt-3.5-turbo', 'messages': [ { 'role': 'user', 'content': 'Tell me a funny joke' } ] }) }) You can run this right from your browser console and see the request in the Network tab of your dev tools. The response should be a JSON object with a bunch of properties, but the one we’re most interested in is the "choices". It will be an array of text completion objects. The first one should be an object with a "message" object that has a "content" property with the chat completion. JSON { "id": "chatcmpl-7q63Hd9pCPxY3H4pW67f1BPSmJs2u", "object": "chat.completion", "created": 1692650675, "model": "gpt-3.5-turbo-0613", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Why don't scientists trust atoms?\n\nBecause they make up everything!" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 12, "completion_tokens": 13, "total_tokens": 25 } } Congrats! Now you can request a mediocre joke whenever you want. Build the Form The fetch request above is fine, but it’s not quite an application. What we want is something a user can interact with to generate an HTTP request like the one above. For that, we’ll want to start with an HTML <form> containing a <textarea>. 
Below is the minimum markup we need: HTML <form> <label for="prompt">Prompt</label> <textarea id="prompt" name="prompt"></textarea> <button>Tell me</button> </form> We can copy and paste this form right inside our Qwik component’s JSX template. If you’ve worked with JSX in the past, you may be used to replacing the for attribute on <label> with htmlFor, but Qwik’s compiler doesn’t require us to do that, so it’s fine as is. Next, we’ll want to replace the default form submission behavior. By default, when an HTML form is submitted, the browser will create an HTTP request by loading the URL provided in the form’s action attribute. If none is provided, it will use the current URL. We want to avoid this page load and use JavaScript instead. If you’ve done this before, you may be familiar with the preventDefault method on the Event interface. As the name suggests, it prevents the default behavior for the event. There’s a challenge here due to how Qwik deals with event handlers. Unlike other frameworks, Qwik does not download all the JavaScript logic for the application upon the first page load. Instead, it has a very thin client that intercepts user interactions and downloads the JavaScript event handlers on demand. This asynchronous nature makes Qwik applications much faster to load but introduces the challenge of dealing with event handlers asynchronously. It makes it impossible to prevent the default behavior the same way as with synchronous event handlers that are downloaded and parsed before the user interacts. Fortunately, Qwik provides a way to prevent the default behavior by adding preventdefault:{eventName} to the HTML tag. A very basic form example may look something like this: JavaScript import { component$ } from '@builder.io/qwik'; export default component$(() => { return ( <form preventdefault:submit onSubmit$={(event) => { console.log(event) }} > {/* form contents */} </form> ) }) Did you notice that little $ at the end of the onSubmit$ handler, there? Keep an eye out for those, because they are usually a hint to the developer that Qwik’s compiler is going to do something funny and transform the code. In this case, it’s due to the lazy-loading event handling system I mentioned above. Incorporate the Fetch Request Now we have the tools in place to replace the default form submission with the fetch request we created above. What we want to do next is pull the data from the <textarea> into the body of the fetch request. We can do so with FormData, which expects a form element as an argument and provides an API to access form control values through the controls’ name attributes. We can access the form element from the event’s target property, use it to create a new FormData object, and use that to get the <textarea> value by referencing its name, “prompt”. 
Plug that into the body of the fetch request we wrote above, and you might get something that looks like this: JavaScript export default component$(() => { return ( <form preventdefault:submit onSubmit$={(event) => { const form = event.target const formData = new FormData(form) const prompt = formData.get('prompt') const body = { 'model': 'gpt-3.5-turbo', 'messages': [{ 'role': 'user', 'content': prompt }] } fetch('https://api.openai.com/v1/chat/completions', { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer OPENAI_API_KEY' }, body: JSON.stringify(body) }) }} > {/* form contents */} </form> ) }) In theory, you should now have a form on your page that, when submitted, sends the value from the textarea to the OpenAI API. Protect Your API Keys Although our HTTP request is working, there’s a glaring issue. Because it’s being constructed on the client side, anyone can open the browser dev tools and inspect the properties of the request. This includes the Authorization header containing our API keys. (In the screenshot of the request properties, I’ve blocked out my API token with a red bar.) This would allow someone to steal our API tokens and make requests on our behalf, which could lead to abuse or higher charges on our account. Not good!!! The best way to prevent this is to move this API call to a backend server that we control that would work as a proxy. The frontend can make an unauthenticated request to the backend, and the backend would make the authenticated request to OpenAI and return the response to the frontend. Because users can’t inspect backend processes, they would not be able to see the Authorization header. So how do we move the fetch request to the backend? I’m so glad you asked! We’ve been mostly focusing on building the front end with Qwik, the framework, but we also have access to Qwik City, the full-stack meta-framework with tooling for file-based routing, route middleware, HTTP endpoints, and more. Of the various options Qwik City offers for running backend logic, my favorite is routeAction$. It allows us to create a backend function triggered by the client over HTTP (essentially an RPC endpoint). The logic would follow: Use routeAction$() to create an action. Provide the backend logic as the parameter. Programmatically execute the action’s submit() method. A simplified example could be: JavaScript import { component$ } from '@builder.io/qwik'; import { routeAction$ } from '@builder.io/qwik-city'; export const useAction = routeAction$((params) => { console.log('action on the server', params) return { o: 'k' } }) export default component$(() => { const action = useAction() return ( <> <form preventdefault:submit onSubmit$={(event) => { action.submit('data') }} > {/* form contents */} </form> { JSON.stringify(action) } </> ) }) I included a JSON.stringify(action) at the end of the template because I think you should see what the returned ActionStore looks like. It contains extra information like whether the action is running, what the submission values were, what the response status is, what the returned value is, and more. This is all very useful data that we get out of the box just by using an action, and it allows us to create more robust applications with less work. Enhance the Experience Qwik City's actions are cool, but they get even better when combined with Qwik’s <Form> component: Under the hood, the component uses a native HTML <form> element, so it will work without JavaScript. 
When JS is enabled, the component will intercept the form submission and trigger the action in SPA mode, allowing us to have a full SPA experience. By replacing the HTML <form> element with Qwik’s <Form> component, we no longer have to set up preventdefault:submit, onSubmit$, or call action.submit(). We can just pass the action to the action prop and it’ll take care of the work for us. Additionally, it will work if JavaScript is not available for some reason (we could have done this with the HTML version as well, but it would have been more work). JavaScript import { component$ } from '@builder.io/qwik'; import { routeAction$, Form } from '@builder.io/qwik-city'; export const useAction = routeAction$(() => { console.log('action on the server') return { o: 'k' } }); export default component$(() => { const action = useAction() return ( <Form action={action}> {/* form contents */} </Form> ) }) So that’s an improvement for the developer experience. Let’s also improve the user experience. Within the ActionStore, we have access to the isRunning data which keeps track of whether the request is pending or not. It’s handy information we can use to let the user know when the request is in flight. We can do so by modifying the text of the submit button to say “Tell me” when it’s idle, then “One sec…” while it’s loading. I also like to assign the aria-disabled attribute to match the isRunning state. This will hint to assistive technology that it’s not ready to be clicked (though technically still can be). It can also be targeted with CSS to provide visual styles suggesting it’s not quite ready to be clicked again. HTML <button type="submit" aria-disabled={action.isRunning}> {action.isRunning ? 'One sec...' : 'Tell me'} </button> Show the Results Ok, we’ve done way too much work without actually seeing the results on the page. It’s time to change that. Let’s bring the fetch request we prototyped earlier in the browser into our application. We can copy/paste the fetch code right into the body of our action handler, but to access the user’s input data, we’ll need access to the form data that is submitted. Fortunately, any data passed to the action.submit() method will be available to the action handler as the first parameter. It will be a serialized object where the keys correspond to the form control names. Note that I’ll be using the await keyword in the body of the handler, which means I also have to tag the handler as an async function. JavaScript import { component$ } from '@builder.io/qwik'; import { routeAction$, Form } from '@builder.io/qwik-city'; export const useAction = routeAction$(async (formData) => { const prompt = formData.prompt // From <textarea name="prompt"> const body = { 'model': 'gpt-3.5-turbo', 'messages': [{ 'role': 'user', 'content': prompt }] } const response = await fetch('https://api.openai.com/v1/chat/completions', { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer OPENAI_API_KEY' }, body: JSON.stringify(body) }) const data = await response.json() return data.choices[0].message.content }) At the end of the action handler, we also want to return some data for the front end. The OpenAI response comes back as JSON, but I think we might as well just return the text. If you remember from the response object we saw above, that data is located at responseBody.choices[0].message.content. If we set things up correctly, we should be able to access the action handler’s response in the ActionStore‘s value property. 
This means we can conditionally render it somewhere in the template like so: JavaScript {action.value && ( <p>{action.value}</p> )} Use Environment Variables Alright: we’ve moved the OpenAI request to the backend, protected our API keys from prying eyes, we’re getting a (mediocre joke) response, and we’re displaying it on the front end. The app is working, but there’s still one more security issue to deal with. It’s generally a bad idea to hardcode API keys into your source code, for several reasons: It means you can’t share the repo publicly without exposing your keys. You may run up API usage during development, testing, and staging. Changing API keys requires code changes and re-deploys. You’ll need to regenerate API keys anytime someone leaves the org. A better system is to use environment variables. With environment variables, you can provide the API keys only to the systems and users that need access to them. For example, you can make an environment variable called OPENAI_API_KEY with the value of your OpenAI key for only the production environment. This way, only developers with direct access to that environment would be able to access it. This greatly reduces the likelihood of the API keys leaking, it makes it easier to share your code openly, and because you are limiting access to the keys to the least number of people, you don’t need to replace keys as often when someone leaves the company. In Node.js, it’s common to set environment variables from the command line (ENV_VAR=example npm start) or with the popular dotenv package. Then, in your server-side code, you can access environment variables using process.env.ENV_VAR. Things work slightly differently with Qwik. Qwik can target different JavaScript runtimes (not just Node), and accessing environment variables via process.env is a Node-specific concept. To make things more runtime-agnostic, Qwik provides access to environment variables through a RequestEvent object, which is available as the second parameter to the route action handler function. JavaScript import { routeAction$ } from '@builder.io/qwik-city'; export const useAction = routeAction$((param, requestEvent) => { const envVariableValue = requestEvent.env.get('ENV_VARIABLE_NAME') console.log(envVariableValue) return {} }) So that’s how we access environment variables, but how do we set them? Unfortunately, for production environments, setting environment variables will differ depending on the platform. For a standard server VPS, you can still set them with the terminal as you would in Node (ENV_VAR=example npm start). In development, we can alternatively create a local.env file containing our environment variables, and they will be automatically loaded for us. This is convenient since we spend a lot more time starting the development environment, and it means we can provide the appropriate API keys only to the people who need them. So after you create a local.env file, you can assign the OPENAI_API_KEY variable to your API key. Shell OPENAI_API_KEY="your-api-key" (You may need to restart your dev server.) Then we can access the environment variable through the RequestEvent parameter. With that, we can replace the hard-coded value in our fetch request’s Authorization header with the variable using template literals. 
JavaScript export const usePromptAction = routeAction$(async (formData, requestEvent) => { const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY') const prompt = formData.prompt const body = { model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: prompt }] } const response = await fetch('https://api.openai.com/v1/chat/completions', { method: 'post', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${OPENAI_API_KEY}`, }, body: JSON.stringify(body) }) const data = await response.json() return data.choices[0].message.content }) For more details on environment variables in Qwik, see their documentation. Summary When a user submits the form, the default behavior is intercepted by Qwik’s optimizer which lazy loads the event handler. The event handler uses JavaScript to create an HTTP request containing the form data to send to the server to be handled by the route’s action. The route’s action handler will have access to the form data in the first parameter and can access environment variables from the second parameter (a RequestEvent object). Inside the route’s action handler, we can construct and send the HTTP request to OpenAI using the data we got from the form and the API keys we pulled from the environment variables. With the OpenAI response, we can prepare the data to send back to the client. The client receives the response from the action and can update the page accordingly. Here’s what my final component looks like, including some Tailwind classes and a slightly different template. JavaScript import { component$ } from "@builder.io/qwik"; import { routeAction$, Form } from "@builder.io/qwik-city"; export const usePromptAction = routeAction$(async (formData, requestEvent) => { const OPENAI_API_KEY = requestEvent.env.get('OPENAI_API_KEY') const prompt = formData.prompt const body = { model: 'gpt-3.5-turbo', messages: [{ role: 'user', content: prompt }] } const response = await fetch('https://api.openai.com/v1/chat/completions', { method: 'post', headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${OPENAI_API_KEY}`, }, body: JSON.stringify(body) }) const data = await response.json() return data.choices[0].message.content }) export default component$(() => { const action = usePromptAction() return ( <main class="max-w-4xl mx-auto p-4"> <h1 class="text-4xl">Hi</h1> <Form action={action} class="grid gap-4"> <div> <label for="prompt">Prompt</label> <textarea name="prompt" id="prompt"> Tell me a joke </textarea> </div> <div> <button type="submit" aria-disabled={action.isRunning}> {action.isRunning ? 'One sec...' : 'Tell me'} </button> </div> </Form> {action.value && ( <article class="mt-4 border border-2 rounded-lg p-4 bg-[canvas]"> <p>{action.value}</p> </article> )} </main> ); }); Conclusion All right! We’ve gone from a script that uses AI to get mediocre jokes to a full-blown application that securely makes HTTP requests to a backend that uses AI to get mediocre jokes and sends them back to the front end to put those mediocre jokes on a page. You should feel pretty good about yourself. But not too good, because there’s still room to improve. In our application, we are sending a request and getting an AI response, but we are waiting for the entirety of the body of that response to be generated before showing it to the users. These AI responses can take a while to complete. 
If you’ve used AI chat tools in the past, you may be familiar with the experience where it looks like it’s typing the responses to you, one word at a time, as they’re being generated. This doesn’t speed up the total request time, but it does get some information back to the user much sooner and feels like a faster experience. In the next post, we’ll learn how to build that same feature using HTTP streams, which are fascinating and powerful but also can be kind of confusing. So I’m going to dedicate an entire post just to that. I hope you’re enjoying this series and plan to stick around. In the meantime, have fun generating some mediocre jokes. Thank you so much for reading. If you liked this article, and want to support me, the best ways to do so are to share it and follow me on Twitter.
In the dynamic world of web development, Single Page Applications (SPAs) and frameworks like React, Angular, and Vue.js have emerged as the preferred approach for delivering seamless user experiences. With the evolution of the Kotlin language and its recent multiplatform capabilities, new options exist that are worthwhile to evaluate. In this article, we will explore Kotlin/JS for creating a web application that communicates with a Spring Boot backend, which is also written in Kotlin. In order to keep it as simple as possible, we will not bring in any other framework. Advantages of Kotlin/JS for SPA Development As described in the official documentation, Kotlin/JS provides the ability to transpile Kotlin code, the Kotlin standard library, and any compatible dependencies to JavaScript (ES5). With Kotlin/JS we can manipulate the DOM and create dynamic HTML by taking advantage of Kotlin's conciseness and expressiveness, coupled with its compatibility with JavaScript. And of course, we do have the much-needed type safety, which reduces the likelihood of runtime errors. This enables developers to write client-side code with reduced boilerplate and fewer errors. Additionally, Kotlin/JS seamlessly integrates with popular JavaScript libraries (and frameworks), thus leveraging the extensive ecosystem of existing tools and resources. And, last but not least: this makes it easier for a backend developer to be involved with the frontend part, as it looks more familiar. Moderate knowledge of "vanilla" JavaScript, the DOM, and HTML is of course needed; but especially when we are dealing with non-intensive apps (admin panels, back-office sites, etc.), one can get engaged rather smoothly. Sample Project The complete source code for this showcase is available on GitHub. The backend utilizes Spring Security for protecting a simple RESTful API with basic CRUD operations. We won't expand more on this since we want to keep the spotlight on the frontend part, which demonstrates the following: Log in with username/password Cookie-based session Page layout with multiple tabs and top navigation bar (based on Bootstrap) Client-side routing (based on Navigo) Table with pagination, sorting, and filtering populated with data fetched from the backend (based on DataTables) Basic form with input fields including (dependent) drop-down lists (based on Bootstrap) Modals and loading masks (based on Bootstrap and spin.js) Usage of sessionStorage and localStorage Usage of Ktor HttpClient for making HTTP calls to the backend An architectural overview is provided in the diagram below: Starting Point The easiest way to start exploring is by creating a new Kotlin Multiplatform project from IntelliJ. The project's template must be "Full-Stack Web Application": This will create the following project structure: springMain: This is the module containing the server-side implementation. springTest: For the Spring Boot tests commonMain: This module contains "shared" code between the frontend and the backend; e.g., DTOs commonTest: For the unit tests of the "common" module jsMain: This is the frontend module responsible for our SPA. jsTest: For the Kotlin/JS tests The sample project on GitHub is based on this particular skeleton. Once you clone the project, you may start the backend by executing: $ ./gradlew bootRun This will spin up the Spring Boot app, listening on port 8090. 
In order to start the frontend, execute: $ ./gradlew jsBrowserDevelopmentRun -t This will open up a browser window automatically navigating to http://localhost:8080 and presenting the user login page. For convenience, a couple of users are provisioned on the server (have a look at dev.kmandalas.demo.config.SecurityConfig for details). Once logged in, the user views a group of tabs with the main tab presenting a table (data grid) with items fetched from the server. The user can interact with the table (paging, sorting, filtering, data export) and add a new item (product) by pressing the "Add product" button. In this case, a form is presented within a modal with typical input fields and dependent drop-down lists with data fetched from the server. In fact, there is some caching applied on this part in order to reduce network calls. Finally, from the top navigation bar, the user can toggle the theme (this setting is preserved in the browser's local storage) and perform logout. In the next section, we will explore some low-level details for selected parts of the frontend module. The jsMain Module Let's start by having a look at the structure of the module: The naming of the Kotlin files should give an idea about the responsibility of each class. The "entrypoint" is of course the Main.kt file: Kotlin import home.Layout import kotlinx.browser.window import kotlinx.coroutines.MainScope import kotlinx.coroutines.launch fun main() { MainScope().launch { window.onload = { Layout.init() val router = Router() router.start() } } } Once the "index.html" file is loaded, we initialize the Layout and our client-side Router. Now, the "index.html" imports the JavaScript source files of the things we use (Bootstrap, Navigo, Datatables, etc.) and their corresponding CSS files. And of course, it imports the "transpiled" JavaScript file of our Kotlin/JS application. Apart from this, the HTML body part consists of some static parts like the "Top Navbar," and most importantly, our root HTML div tag. Under this tag, we will perform the DOM manipulations needed for our simple SPA. By importing the kotlinx.browser package in our Kotlin classes and singletons, we have access to top-level objects such as the document and window. The standard library provides typesafe wrappers for the functionality exposed by these objects (wherever possible) as described in the Browser and DOM API. This is what we do in most parts of the module: we write Kotlin instead of JavaScript (or jQuery), while having type safety without using, e.g., TypeScript. 
So, for example, we can create content like this: Kotlin private fun buildTable(products: List<Product>): HTMLTableElement { val table = document.createElement("table") as HTMLTableElement table.className = "table table-striped table-hover" // Header val thead = table.createTHead() val headerRow = thead.insertRow() headerRow.appendChild(document.createElement("th").apply { textContent = "ID" }) headerRow.appendChild(document.createElement("th").apply { textContent = "Name" }) headerRow.appendChild(document.createElement("th").apply { textContent = "Category" }) headerRow.appendChild(document.createElement("th").apply { textContent = "Price" }) // Body val tbody = table.createTBody() for (product in products) { val row = tbody.insertRow() row.appendChild(document.createElement("td").apply { textContent = product.id.toString() }) row.appendChild(document.createElement("td").apply { textContent = product.name }) row.appendChild(document.createElement("td").apply { textContent = product.category.name }) row.appendChild(document.createElement("td").apply { textContent = product.price.toString() }) } document.getElementById("root")?.appendChild(table) return table } Alternatively, we can use the typesafe HTML DSL of the kotlinx.html library, which looks pretty cool. Or we can load HTML content as "templates" and further process them. It seems that many possibilities exist for this task. Moving on, we can attach event listeners, and thus dynamic behavior, to our UI elements like this: Kotlin categoryDropdown?.addEventListener("change", { val selectedCategory = categoryDropdown.value // Fetch sub-categories based on the selected category mainScope.launch { populateSubCategories(selectedCategory) } }) Before talking about some "exceptions to the rule," it's worth mentioning that we use the Ktor HTTP client (see ProductApi) for making the REST calls to the backend. We could use the ported Fetch API for this task, but going with the client looks way better. Of course, we need to add the ktor-client as a dependency to the build.gradle.kts file: Kotlin val jsMain by getting { dependsOn(commonMain) dependencies { implementation("io.ktor:ktor-client-core:$ktorVersion") implementation("io.ktor:ktor-client-js:$ktorVersion") implementation("io.ktor:ktor-client-content-negotiation:$ktorVersion") //... } } The client includes in its HTTP requests the JSESSIONID browser cookie received from the server upon successful authentication. If this is omitted, we will get back HTTP 401/403 errors from the server. These are also handled and displayed within Bootstrap modals. Also, a very convenient thing regarding the client-server communication is the sharing of common data classes (Product.kt and Category.kt, in our case) between the jsMain and springMain modules. Exception 1: Use Dependencies From npm For client-side routing, we selected the Navigo JavaScript library. This library is not part of Kotlin/JS, but we can import it in Gradle using the npm function: Kotlin val jsMain by getting { dependsOn(commonMain) dependencies { //... implementation(npm("navigo", "8.11.1")) } } However, because JavaScript modules are dynamically typed and Kotlin is statically typed, in order to manipulate Navigo from Kotlin we have to provide an "adapter." 
This is what we do within the Router.kt class: Kotlin @JsModule("navigo") @JsNonModule external class Navigo(root: String, resolveOptions: ResolveOptions = definedExternally) { fun on(route: String, handler: () -> Unit) fun resolve() fun navigate(s: String) } With this in place, the Navigo JavaScript module can be used just like a regular Kotlin class. Exception 2: Use JavaScript Code From Kotlin It is possible to invoke JavaScript functions from Kotlin code using the js() function. Here are some examples from our sample project: Kotlin // From ProductTable.kt: private fun initializeDataTable() { js("new DataTable('#$PRODUCTS_TABLE_ID', $DATATABLE_OPTIONS)") } // From ModalUtil.kt: val modalElement = document.getElementById(modal.id) as? HTMLDivElement modalElement?.let { js("new bootstrap.Modal(it).show()") } However, this should be used with caution, since this way we are outside Kotlin's type system. Takeaways In general, the best framework to choose depends on several factors, with one of the most important ones being, "The one that the developer team is more familiar with." On the other hand, according to the Thoughtworks Technology Radar, the SPA-by-default approach is under question, meaning that we should not blindly accept the complexity of SPAs and their frameworks when the business needs don't justify it. In this article, we provided an introduction to Kotlin Multiplatform with Kotlin/JS, which brings new things to the table. Taking into consideration the latest additions in the ecosystem - namely Kotlin Wasm and Compose Multiplatform - it becomes evident that these advancements offer not only a fresh perspective but also robust solutions for streamlined development.
The appearance of simple and cheap single-board computers (SBCs) greatly promoted the IoT world, providing the possibility to develop a wide range of control systems and devices for industrial, domestic, medical, and other uses. Now, everybody can develop the stuff they need for their own purposes, contribute to the development of public projects, and use products developed by others. In this article, we are going to develop a control system to manage basic garden activities, like watering, illumination, etc. To make our application more flexible and expandable, we will develop it as a layered, distributed system of loosely coupled components, communicating with each other via a standard (REST, in our case) protocol. We will use the well-known enterprise technologies Node.js and React.js, and a Raspberry Pi Zero device for the sensor layer of our application. The main function of our sensor layer component is to control the devices performing the main activities in our garden. Let’s suppose we need to control a watering circuit and a lighting circuit, that is, to switch a watering pump and an outdoor lamp on and off. Firstly, we connect all the hardware units, and then we develop the necessary software to bring the hardware to life. Hardware Set-Up: Relay Board Connection, Circuit Assembling For our case of two devices, we can use a Raspberry Pi Zero W SBC and the SB Components Zero Relay HAT (‘Hardware Attached on Top’). Each of the HAT relays has NC (normally closed) and NO (normally open) contacts. We need our watering and lighting circuits to close (switch on) only when we need to switch on the pump and the lamp, so we connect the circuit ends to the NO and COM contacts, as shown in Fig. 1.1. Fig. 1.1. Relay connection diagram Software Set-Up: OS and Library Installation, the Control Software Development Provided all hardware components are connected, we can add the software to make the system work. First of all, we need to install an operating system on our Raspberry Pi device. There are several ways to do that; probably the most comfortable one is to use the Raspberry Pi Imager. With this application, we can download an appropriate OS and write it to an SD card to boot the SBC; we can use Raspberry Pi OS (32-bit) from the installation menu. Provided the Raspberry Pi is equipped with an appropriate OS and we have access to its command line, we can prepare all the necessary software. Our component has two tasks: expose an API to accept commands for controlling the relays, and pass these commands to the relays. Let’s start with the API implementation, which is a common task for most modern enterprise applications. Implementation of the Control API As we discussed earlier, we are going to use the REST protocol for communication between our components, so we need to expose REST endpoints for the controlling interface. Considering the somewhat restricted profile of our Raspberry Pi Zero computer, we should keep the API implementation as lightweight as possible. One suitable technology for this case is Node.js. Essentially, it is a JavaScript runtime, which provides the possibility to run JavaScript code on the server side. Because of its design, Node.js is particularly useful for building web applications that handle requests over the internet and provide processed data and views in return. We are going to use the Express web framework in our Node.js application to facilitate the request handling. Provided Node.js is running on our system, we can start implementing the control API.
Let’s create a web controller, which will be the main controlling unit of our component. We can implement three endpoints for each device relay – switch-on, switch-off, and status endpoints. It is a good idea to create a Node package for our application, so we create a new directory, "smartgarden", in the Raspberry Pi home directory and run the following commands inside the directory to install all the necessary dependencies and create the package descriptor: pi@raspberrypiZero:~/smartgarden $ npm install express --save pi@raspberrypiZero:~/smartgarden $ npm install http --save We begin with the following basic script and will gradually add all the necessary functionality. Listing 2.1. smartgarden/outdoorController.js: the component web API JavaScript const express = require("express"); const http = require("http"); var app = express(); app.use((req, res, next) => { res.append('Access-Control-Allow-Origin', ['*']); res.append('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE'); res.append('Access-Control-Allow-Headers', 'Content-Type'); next(); }); var server = new http.Server(app); app.get("/water/on", (req, res) => { console.log("Watering switched on"); }); app.get("/water/off", (req, res) => { console.log("Watering switched off"); }); app.get("/water/status", (req, res) => { console.log("TODO return the watering relay status"); }); app.get("/light/on", (req, res) => { console.log("Light switched on"); }); app.get("/light/off", (req, res) => { console.log("Light switched off"); }); app.get("/light/status", (req, res) => { console.log("TODO return the light relay status"); }); server.listen(3030, () => {console.log("Outdoor controller listens to port 3030")}); We can run the application with the following command: pi@raspberrypiZero:~ $ node smartgarden/outdoorController.js There should be the following message in the terminal: Outdoor controller listens to port 3030 When navigating to the defined endpoints, we should see the corresponding output, for example, “Light switched off” for the /light/off endpoint. If that’s the case, our control API is working! That’s great, but it doesn’t do any useful work yet. Let’s fix that by adding the code that passes the commands to the physical devices, i.e., the relays. Passing the Commands to the Physical Devices There are several ways to communicate with physical devices from inside a JavaScript application. Here, we are going to use the Johnny-Five JavaScript library. Johnny-Five is a JavaScript robotics and IoT platform that is adapted for many boards, including the Raspberry Pi. It provides support for various equipment like relays, sensors, servos, etc. Provided the Node and npm tools are installed on your Raspberry Pi, you can install the Johnny-Five library with the following command: pi@raspberrypiZero:~/smartgarden $ npm install johnny-five --save Also, we need to install the raspi-io package. Raspi IO is an I/O plugin for the Johnny-Five Node.js robotics platform that enables Johnny-Five to control the hardware on a Raspberry Pi.
pi@raspberrypiZero:~/smartgarden $ npm install raspi-io --save To test the installation, we can run this script: JavaScript const Raspi = require('raspi-io').RaspiIO; const five = require('johnny-five'); const board = new five.Board({ io: new Raspi() }); board.on('ready', () => { // Create an Led on pin 7 (GPIO4) on P1 and strobe it on/off // Optionally set the speed; defaults to 100ms (new five.Led('P1-7')).strobe(); }); Because of the conservative permissions for interacting with GPIO in Raspbian, you need to execute this script using sudo. If the installation is successful, we should see the LED blinking with the default frequency of 100 ms. Now we can add the Johnny-Five support to our controller, as shown in listing 2.2. Listing 2.2. smartgarden/outdoorController.js with the relay control enabled JavaScript const express = require("express"); const http = require("http"); const five = require("johnny-five"); const { RaspiIO } = require('raspi-io'); var app = express(); app.use((req, res, next) => { res.append('Access-Control-Allow-Origin', ['*']); res.append('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE'); res.append('Access-Control-Allow-Headers', 'Content-Type'); next(); }); var server = new http.Server(app); const board = new five.Board({io: new RaspiIO(), repl: false}); board.on("ready", function() { var relay1 = new five.Relay({pin:"GPIO22",type: "NO"}); var relay2 = new five.Relay({pin:"GPIO5",type: "NO"}); app.get("/water/:action", (req, res) => { switch(req.params.action) { case 'on': relay1.close(); res.send(relay1.value.toString()); break; case 'off': relay1.open(); res.send(relay1.value.toString()); break; case 'status': res.send(relay1.value.toString()); break; default: console.log('Unknown command: ' + req.params.action); res.sendStatus(400); } }); app.get("/light/:action", (req, res) => { switch(req.params.action) { case 'on': relay2.close(); res.send(relay2.value.toString()); break; case 'off': relay2.open(); res.send(relay2.value.toString()); break; case 'status': res.send(relay2.value.toString()); break; default: console.log('Unknown command: ' + req.params.action); res.sendStatus(400); } }); server.listen(3030, () => { console.log('Outdoor controller listens to port 3030'); }); }); We run the updated application as follows: pi@raspberrypiZero:~ $ sudo node smartgarden/outdoorController.js If we navigate to the endpoints now, we should see the corresponding relays switch on and off. If this is the case, our controller component works properly! Well done! However, it is not very convenient to send the commands by typing REST requests in an HTTP client. It would be good to provide a GUI for it. Development of the Command User Interface There are various options for implementing the control interface, and we are going to use React.js for this task. React.js is a lightweight JavaScript library oriented toward the creation of component-based web UIs, which makes it the right choice for our case of a distributed component application. To use React to its full extent, we can install the create-react-app tool: npm install -g create-react-app After that, we can create a JavaScript application for our front-end stuff.
Provided we are in the project root directory, we run the following command: .../project-root>create-react-app smartgarden This command creates a new folder ('smartgarden') with a ready-to-run prototype React application. Now, we can enter the directory and run the application as follows: .../project-root>cd smartgarden .../project-root/smartgarden>npm start This starts the application in a new browser tab at http://localhost:3000. It is a trivial but completely functional front-end application, which we can use as a prototype for creating our UI. React supports component hierarchies, where each component can have a state, and the state can be shared between related components. Also, each component's behavior can be customized by passing properties to it. So, we can develop the main component, which works as the placeholder for displaying screens or forms for the corresponding actions. Also, to accelerate the development and give our UI a familiar, user-friendly look, we are going to use the MUI component library, which is one of the most popular React component libraries. We can install the library with the following command: npm install @mui/material @emotion/react @emotion/styled To keep all configuration settings in one place, we create a configuration class and put it in a dedicated configuration directory: Listing 2.3. configuration.js: Configuration class including all the application settings. JavaScript class Configuration { WATERING_ON_PATH = "/water/on"; WATERING_OFF_PATH = "/water/off"; WATERING_STATUS_PATH = "/water/status"; LIGHT_ON_PATH = "/light/on"; LIGHT_OFF_PATH = "/light/off"; LIGHT_STATUS_PATH = "/light/status"; CONTROLLER_URL = process.env.REACT_APP_CONTROLLER_URL ? process.env.REACT_APP_CONTROLLER_URL : window.CONTROLLER_URL ? window.CONTROLLER_URL : "http://localhost:3030"; } export default Configuration; The configuration class contains the controller server URL and the paths to the control endpoints. For the controller URL, we provide two possibilities for external configuration and a default value, http://localhost:3030 in our case. You can substitute it with the corresponding URL of your controller server. It is a good idea to put all related functionality in one place. Putting our functionality behind a service, which exposes certain APIs, ensures more flexibility and testability for our application. So, we create a control service class, which implements all the basic operations for data exchange with the controller server and exposes these operations as methods for all React components. To make our UI more responsive, we implement the methods as asynchronous. Provided the API is unchanged, we can change the implementation freely, and none of the consumers will be affected. Our service can look like this. Listing 2.4. services/ControlService.js – API for communication with the sensor layer JavaScript import Configuration from './configuration'; class ControlService { constructor() { this.config = new Configuration(); } async switchWatering(switchOn) { console.log("ControlService.switchWatering():"); let actionUrl = this.config.CONTROLLER_URL + (switchOn ?
this.config.WATERING_ON_PATH : this.config.WATERING_OFF_PATH); return fetch(actionUrl ,{ method: "GET", mode: "cors" }) .then(response => { if (!response.ok) { this.handleResponseError(response); } return response.text(); }).then(result => { return result; }).catch(error => { this.handleError(error); }); } async switchLight(switchOn) { console.log("ControlService.switchLight():"); let actionUrl = this.config.CONTROLLER_URL + (switchOn ? this.config.LIGHT_ON_PATH : this.config.LIGHT_OFF_PATH); return fetch(actionUrl ,{ method: "GET", mode: "cors" }) .then(response => { if (!response.ok) { this.handleResponseError(response); } return response.text(); }).then(result => { return result; }).catch(error => { this.handleError(error); }); } handleResponseError(response) { throw new Error("HTTP error, status = " + response.status); } handleError(error) { console.log(error.message); } } export default ControlService; To encapsulate the control functionality, we create the control panel component, as shown in listing 2.5. To keep our code structured, we put the component into the “components” folder. Listing 2.5. components/ControlPanel.js: React component containing UI elements to send commands for controlling the garden devices. JavaScript import React, { useEffect, useState } from 'react'; import Switch from '@mui/material/Switch'; import ControlService from '../services/ControlService'; import Configuration from '../configuration'; function ControlPanel() { const controlService = new ControlService(); const [checked1, setChecked1] = useState(false); const [checked2, setChecked2] = useState(false); useEffect(() => { const config = new Configuration(); const fetchData = async () => { try { let response = await fetch(config.CONTROLLER_URL + config.WATERING_STATUS_PATH); const isWateringActive = await response.text(); setChecked1(isWateringActive === "1" ? true : false); response = await fetch(config.CONTROLLER_URL + config.LIGHT_STATUS_PATH); const isLightActive = await response.text(); setChecked2(isLightActive === "1" ? true : false); } catch (error) { console.log("error", error); } }; fetchData(); }, []); const handleWatering = (event) => { controlService.switchWatering(event.target.checked).then(result => { setChecked1(result === "1" ? true : false); }); }; const handleLight = (event) => { controlService.switchLight(event.target.checked).then(result => { setChecked2(result === "1" ? true : false); }); }; return( <React.Fragment> <div> <label htmlFor='device1'> <span>Watering</span> <Switch id="1" name="device1" checked={checked1} onChange={handleWatering} /> </label> <label htmlFor='device2'> <span>Light</span> <Switch id="2" name="device2" checked={checked2} onChange={handleLight}/> </label> </div> </React.Fragment> ); } export default ControlPanel; Using the stuff generated by the create-react-app tool, we can change the content of app.js as follows. Listing 2.6.
The base UI component JavaScript import './App.css'; import React from 'react'; import AppBar from '@mui/material/AppBar'; import Toolbar from '@mui/material/Toolbar'; import Typography from '@mui/material/Typography'; import ControlPanel from './components/ControlPanel'; import { createTheme, ThemeProvider } from '@mui/material/styles'; function App() { const theme = createTheme( { palette: { primary: { main: '#1b5e20', }, secondary: { main: '#689f38', }, }, }); return ( <div className="App"> <ThemeProvider theme={theme}> <AppBar position="static"> <Toolbar> <Typography variant="h5">Smart Garden</Typography> </Toolbar> </AppBar> <ControlPanel/> </ThemeProvider> </div> ); } export default App; Now, it is time to test our UI application. You will probably have to set the CONTROLLER_URL configuration parameter to the IP address of the Raspberry Pi device where the outdoor controller back-end application is running, something like “http://192.168.nn.nn:3030”. After starting the application, it opens in a new browser tab. Fig. 2.1. Control UI front-end application If our outdoor controller application is running on a Raspberry Pi device connected to the control circuit (see Fig. 1.1), we can now switch the watering and lighting circuits on and off. We should see a screen similar to the one shown in Fig. 2.1. If this is the case, you have done it! Congratulations! Now, you have a working remote control system, which can be extended for various devices and equipment. The article's source code is available on GitHub. Also, we can integrate the sensor layer controller package into any system and communicate with its REST endpoints from UI and business logic components, as the sketch below illustrates.
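As a quick illustration of that last point, here is a minimal sketch of a standalone client for the controller's REST API, usable from any TypeScript component outside the React UI. The controller address below is a placeholder; the endpoints are the ones defined in listing 2.2. TypeScript
// Hypothetical standalone client for the outdoor controller's REST API.
// Substitute your own Raspberry Pi address for the placeholder below.
const CONTROLLER_URL = "http://192.168.0.42:3030";

// Sends a command ("on" | "off" | "status") to a device ("water" | "light")
// and returns the relay state reported by the controller ("0" or "1").
async function sendCommand(
  device: "water" | "light",
  action: "on" | "off" | "status"
): Promise<string> {
  const response = await fetch(`${CONTROLLER_URL}/${device}/${action}`);
  if (!response.ok) {
    throw new Error(`HTTP error, status = ${response.status}`);
  }
  return response.text();
}

// Example: switch the watering pump on, then log the reported relay state.
sendCommand("water", "on")
  .then(state => console.log("Watering relay state:", state))
  .catch(err => console.error(err.message));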
As React Native applications evolve, the need for efficient state management becomes increasingly evident. While Async Storage serves its purpose for local data persistence, transitioning to the Context API with TypeScript brings forth a more organized and scalable approach. This comprehensive guide will walk you through the migration process step by step, leveraging the power of TypeScript. Understanding Async Storage and Context API Async Storage in React Native offers asynchronous, persistent storage for key-value data on the device. As the application scales, managing the state solely through Async Storage might become cumbersome. The Context API, in conjunction with TypeScript, provides a structured means of sharing state across components without prop drilling. It ensures type safety and enhances development efficiency. Why Replace Async Storage With Context API in TypeScript? Type safety: TypeScript's strong typing system ensures better code integrity and reduces potential runtime errors. Scalability and maintainability: Context API simplifies state management and promotes scalability by facilitating a more organized codebase. Enhanced development experience: TypeScript's static typing aids in catching errors during development, leading to more robust and maintainable code. Step-By-Step Replacement Process 1. Identify Async Storage Usage Review the codebase to locate sections using Async Storage for reading or writing data. 2. Create a Context With TypeScript TypeScript import React, { createContext, useContext, useReducer, Dispatch } from 'react'; interface AppState { // Define your application state interface here exampleData: string; } interface AppAction { // Define action types and payload structure here type: string; payload?: any; } const initialState: AppState = { exampleData: '', }; const AppContext = createContext<{ state: AppState; dispatch: Dispatch<AppAction>; }>({ state: initialState, dispatch: () => null, }); const appReducer = (state: AppState, action: AppAction): AppState => { // Implement your reducer logic here based on action types switch (action.type) { case 'UPDATE_DATA': return { ...state, exampleData: action.payload, }; // Add other cases as needed default: return state; } }; const AppProvider: React.FC<{ children: React.ReactNode }> = ({ children }) => { const [state, dispatch] = useReducer(appReducer, initialState); return ( <AppContext.Provider value={{ state, dispatch }}> {children} </AppContext.Provider> ); }; const useAppContext = () => { return useContext(AppContext); }; export { AppProvider, useAppContext }; 3. Refactor Components To Use Context Update components to consume data from the newly created context: TypeScript import React from 'react'; import { useAppContext } from './AppContext'; const ExampleComponent: React.FC = () => { const { state, dispatch } = useAppContext(); const updateData = () => { const newData = 'Updated Data'; dispatch({ type: 'UPDATE_DATA', payload: newData }); }; return ( <div> <p>{state.exampleData}</p> <button onClick={updateData}>Update Data</button> </div> ); }; export default ExampleComponent; 4. Implement Context Provider Wrap your application's root component with the AppProvider: TypeScript import React from 'react'; import { AppProvider } from './AppContext'; import ExampleComponent from './ExampleComponent'; const App: React.FC = () => { return ( <AppProvider> <ExampleComponent /> {/* Other components using the context */} </AppProvider> ); }; export default App; 5.
Test and Debug Thoroughly test the application to ensure proper functionality, and handle any issues encountered during the migration process.
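One gap worth calling out: Context API state lives in memory only, so replacing Async Storage entirely loses persistence across app restarts. A common compromise is to keep Async Storage as a persistence layer behind the context. The sketch below is one possible approach, assuming the @react-native-async-storage/async-storage package and the AppContext defined above; the storage key name is made up for illustration. TypeScript
import { useEffect } from 'react';
import AsyncStorage from '@react-native-async-storage/async-storage';
import { useAppContext } from './AppContext';

// Hypothetical storage key, for illustration only.
const STORAGE_KEY = 'app_state_example_data';

// Hydrates the context from Async Storage on mount and writes changes back,
// so state survives app restarts while components keep using the context.
export function usePersistedAppState() {
  const { state, dispatch } = useAppContext();

  // Load any previously saved value once, when the hook first runs.
  useEffect(() => {
    AsyncStorage.getItem(STORAGE_KEY).then(saved => {
      if (saved !== null) {
        dispatch({ type: 'UPDATE_DATA', payload: saved });
      }
    });
  }, [dispatch]);

  // Persist the value whenever it changes.
  useEffect(() => {
    AsyncStorage.setItem(STORAGE_KEY, state.exampleData);
  }, [state.exampleData]);
}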
Infrastructure as Code (IaC) has become a key aspect of modern cloud computing. It ensures quick, consistent, and repeatable infrastructure deployment. In this context, the AWS Cloud Development Kit (CDK) stands out by enabling developers to define cloud infrastructure using familiar programming languages. This post will dive deep into advanced techniques for using the AWS CDK with TypeScript and Python, two of the most popular programming languages. Understanding AWS CDK What Is AWS CDK? The AWS Cloud Development Kit (CDK) is an open-source software development framework for modeling and provisioning your cloud application resources using familiar programming languages. With AWS CDK, cloud applications can be provisioned in languages the developer already knows, like TypeScript and Python, providing flexibility and functionality that may not be present in plain JSON/YAML-based CloudFormation. Why Use AWS CDK? AWS CDK simplifies setting up AWS resources, allowing for intricate configurations and the automation of setup tasks. Here’s why developers choose AWS CDK: Familiarity: Developers can use the language they are most comfortable with. Readable code: Infrastructure setup can be read and understood just like any other code developers or IT professionals work with. Reusable components: Common service configurations can be bundled into reusable constructs, eliminating the need to recreate basic configurations. Advanced Techniques With TypeScript and Python Setting up AWS CDK for TypeScript and Python Before leveraging AWS CDK with TypeScript or Python, you must install Node.js, npm, and the AWS CDK Toolkit. Setting up an AWS CDK project for TypeScript or Python involves creating a new directory for your CDK app, initializing the CDK project, and selecting the desired language. Utilizing Constructs in AWS CDK Constructs are the basic building blocks of AWS CDK apps. A construct represents a "cloud component" and encapsulates everything AWS CloudFormation needs to create the component. Creating custom constructs: With AWS CDK, you can create custom constructs to define your own cloud components that can be reused across different projects. Using existing constructs: Leverage the rich libraries of prebuilt constructs provided by the AWS Construct Library, containing constructs for most AWS services. Aspects in AWS CDK Aspects are another powerful feature in AWS CDK. They allow you to apply operations to all constructs within a scope, which helps apply tags, enforce standards, or manage batch operations. Composing Stacks With Stage Stage in AWS CDK allows you to organize your stacks into logical groups. For instance, you might have different stages for development, staging, and production environments, each with its own set of AWS resources. Integrating With AWS Lambda AWS Lambda integration is straightforward with the AWS CDK, especially for Python and TypeScript developers. You can define a Lambda function inline or specify the path to the source code. Managing Different Environments Handling multiple environments (prod, dev, etc.) is streamlined with AWS CDK. By defining environmental contexts, you can easily manage each environment's resources, permissions, and configurations separately. Deploying a Rest API With Amazon API Gateway Creating a RESTful API with AWS CDK is simplified, involving creating a new RestApi construct and defining your API structure and integrations. A minimal sketch of such a stack is shown below.
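To make the ideas above concrete, here is a minimal TypeScript sketch of a stack that wires an inline Lambda function behind an API Gateway REST API. It assumes CDK v2 (aws-cdk-lib), and the resource names are invented for illustration, so treat it as a sketch rather than a canonical setup. TypeScript
import { App, Stack, StackProps } from 'aws-cdk-lib';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import { Construct } from 'constructs';

// A small stack: one Lambda function fronted by a REST API.
class HelloApiStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Inline handler code keeps the example self-contained;
    // real projects would point at a source directory instead.
    const handler = new lambda.Function(this, 'HelloHandler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromInline(
        'exports.handler = async () => ({ statusCode: 200, body: "Hello from CDK" });'
      ),
    });

    // LambdaRestApi proxies every route of the REST API to the function.
    new apigateway.LambdaRestApi(this, 'HelloApi', { handler });
  }
}

const app = new App();
new HelloApiStack(app, 'HelloApiStack');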
Error Handling and Debugging AWS CDK apps are just like any other code and can also have bugs. Learn to use AWS CDK Toolkit commands like cdk diff and cdk synth to catch and resolve errors before deployment. Testing Constructs Testing is critical to software development, and AWS CDK is no exception. Use tools and practices to write unit and integration tests for your constructs to ensure they work as expected. CI/CD With AWS CDK Integrate AWS CDK into your CI/CD pipeline to deploy your stacks automatically. AWS provides services like AWS CodePipeline and AWS CodeBuild, but you can also integrate with other popular tools like Jenkins or Travis CI. Best Practices for Using AWS CDK With TypeScript and Python Keeping Your CDK Version Updated Ensure you are using the latest version of the AWS CDK. New versions bring bug fixes, new features, and security enhancements. Parameterizing Resources Make your constructs and stacks reusable and configurable across different environments by using parameters for your resources. Use the IAM Least Privilege Principle When defining IAM policies, follow the principle of least privilege. Grant only the necessary permissions to reduce the risk of unauthorized access. Handling Secrets Never hardcode secrets in your AWS CDK code. Use AWS Secrets Manager or AWS Systems Manager Parameter Store to handle secrets. Code Reviews and Documentation Perform code reviews and maintain good documentation. This practice is particularly important for IaC, as it directly impacts your application's infrastructure. FAQs 1. How Does AWS CDK Differ From AWS CloudFormation? AWS CloudFormation is an IaC service that uses JSON or YAML to create and manage AWS resources. On the other hand, AWS CDK is a software development framework that allows you to define cloud infrastructure in code using familiar programming languages like TypeScript and Python. AWS CDK synthesizes the code written in these languages into a CloudFormation template. This approach offers the robustness of CloudFormation while simplifying and accelerating IaC processes. 2. Is It Possible to Define Resources Not Supported by the AWS CDK? AWS CDK allows you to define AWS resources even if they don't have corresponding high-level constructs. The AWS CDK includes a set of low-level constructs called the CloudFormation resource classes (Cfn*), one for each resource type defined in the AWS CloudFormation Resource Reference. You can use these classes to define any AWS resource. 3. How Do I Manage State With AWS CDK? After synthesis, AWS CDK applications delegate state management to AWS CloudFormation, which maintains the state of each stack it manages. This includes keeping track of all resources in a particular stack and the parameters used to configure them. However, AWS CDK also maintains an internal state, mainly to track assets (like Lambda code); this state is stored in a "cdk.out" directory, which is generated output and is usually excluded from your version control system. Conclusion Embracing AWS CDK for infrastructure deployment provides a robust, predictable, and repeatable deployment process. The ability to use familiar programming languages like TypeScript and Python to define cloud infrastructure is a game-changer for many developers. By following best practices and leveraging advanced features, you can manage complex infrastructures efficiently and effectively while keeping your codebase clean and understandable. The AWS CDK is still evolving, and staying updated with the latest developments is key to making the most of this powerful tool.
In the dynamic landscape of communication and collaboration, Slack has emerged as a powerful platform for teams to connect and work seamlessly. The integration of GPT (Generative Pre-trained Transformer) with Slack, powered by React, takes this collaboration to new heights. This fusion of advanced language models and a robust communication platform opens up a realm of possibilities for enhanced productivity, creativity, and engagement. Understanding GPT Before delving into the intricacies of GPT Slack React integration, let's grasp the fundamentals of GPT. Developed by OpenAI, GPT is a state-of-the-art language model that utilizes deep learning to generate human-like text based on the input it receives. GPT is pre-trained on vast datasets, making it adept at understanding context, generating coherent responses, and even completing text prompts with remarkable accuracy. The Rise of Slack in Collaboration Slack has become a cornerstone of modern collaboration, offering a centralized space for teams to communicate, share files, and coordinate tasks. Its user-friendly interface and extensive integrations make it a preferred choice for organizations of all sizes. However, integrating GPT into Slack introduces a transformative element, amplifying the platform's capabilities and opening up new horizons for team interactions. GPT Slack React: A Synergistic Blend The integration of GPT with Slack, coupled with the power of React, creates a synergistic blend that enriches the user experience. React, a JavaScript library for building user interfaces, provides a seamless way to integrate GPT capabilities into the Slack environment. This integration enhances communication, automates tasks, and facilitates a more interactive and engaging collaboration experience. Enhancing Communication With GPT-Powered Chatbots One of the key advantages of GPT Slack React integration is the ability to deploy intelligent chatbots within the Slack workspace. These chatbots, powered by GPT, can understand natural language queries, provide relevant information, and even engage in meaningful conversations. This not only streamlines communication but also frees up valuable time for team members by automating routine queries and tasks; a minimal sketch of such a bot follows.
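The sketch below shows one way such a chatbot could be wired up on the server side, assuming the @slack/bolt and openai npm packages. The model name, tokens, and port are placeholders, and a real integration would need error handling, rate limiting, and the privacy safeguards discussed later in this article. TypeScript
import { App } from '@slack/bolt';
import OpenAI from 'openai';

// Placeholder credentials; supply real tokens via environment variables.
const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Answer any message that mentions the bot with a GPT-generated reply.
app.event('app_mention', async ({ event, say }) => {
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o-mini', // placeholder model name
    messages: [
      { role: 'system', content: 'You are a helpful assistant for our team.' },
      { role: 'user', content: event.text ?? '' },
    ],
  });
  await say(completion.choices[0].message.content ?? 'Sorry, I have no answer.');
});

(async () => {
  await app.start(3000); // placeholder port
  console.log('GPT Slack bot is running');
})();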
Automating Tasks for Increased Productivity Imagine a Slack workspace where repetitive tasks are automated with the help of GPT-powered bots. React integration enables the creation of interactive interfaces within Slack channels, allowing users to trigger actions and receive real-time updates. From scheduling meetings to retrieving information from external databases, the possibilities for task automation are limitless, contributing to increased overall productivity. Facilitating Knowledge Sharing and Onboarding GPT Slack React integration proves invaluable in knowledge-sharing and onboarding processes. By leveraging GPT's natural language processing capabilities, teams can create interactive guides, FAQs, and onboarding materials directly within Slack. This ensures that new team members have easy access to information, and existing members can quickly retrieve relevant data without leaving the Slack environment. Enhancing Creativity With GPT-Powered Content Generation Beyond routine tasks, GPT Slack React integration opens up avenues for creative collaboration. The language generation capabilities of GPT can be harnessed to assist in brainstorming sessions, content creation, and idea generation. Whether it's drafting documents, generating code snippets, or composing marketing copy, GPT's contribution enhances the creative aspect of teamwork. Overcoming Challenges: Privacy and Ethical Considerations The integration of GPT into Slack, while promising significant benefits, also introduces challenges related to privacy and ethical considerations. Addressing these challenges is crucial to ensure the responsible and secure use of the technology within the collaborative workspace. Data Security and Privacy Concerns: As GPT processes language data, organizations must prioritize data security to prevent inadvertent exposure of sensitive information. It's essential to implement robust encryption mechanisms and access controls to safeguard confidential communications within Slack. Privacy policies should be transparent, clearly outlining how data is processed, stored, and used, instilling confidence among users about the security of their information. Ethical Use of Language Models: GPT's language generation capabilities raise ethical concerns regarding the potential misuse of the technology. Organizations integrating GPT into Slack should establish ethical guidelines governing the use of language models. This includes ensuring that generated content adheres to ethical standards, avoids biases, and doesn't contribute to misinformation. Regular audits and reviews of GPT-generated content can help maintain ethical standards within the collaborative environment. User Consent and Control: Users should have control over their interactions with GPT-powered features. Providing clear information about how GPT is used within Slack and obtaining explicit user consent ensures transparency. Additionally, incorporating features that allow users to customize their interactions with GPT, such as opting out of certain functionalities, empowers individuals to manage their experience in alignment with their privacy preferences. Monitoring and Compliance: Regular monitoring and compliance checks are essential to identify and rectify any potential privacy or ethical violations. Organizations should establish processes for ongoing monitoring of GPT-generated content, ensuring that it aligns with company policies and legal regulations. Continuous compliance audits help organizations adapt to evolving privacy standards and maintain a responsible and trustworthy collaboration environment. Future Prospects: Advancements in GPT and Slack Integration The integration of GPT into Slack is a dynamic field, and the future holds exciting prospects for advancements in technology, collaboration, and user experience. Several areas are likely to see significant developments as GPT and Slack integration evolves. Enhanced Natural Language Understanding: Future iterations of GPT are expected to exhibit improved natural language understanding. This advancement will enable more contextually aware interactions within Slack, making chatbots and language models even more adept at comprehending nuanced queries and providing accurate, relevant responses. This enhanced understanding will contribute to a more seamless and human-like communication experience within the collaborative platform. Smarter Chatbots and Task Automation: As GPT models become more sophisticated, the capabilities of chatbots integrated into Slack will evolve. Smarter chatbots will be able to handle complex tasks, understand user preferences more intuitively, and contribute to even greater levels of task automation.
This could include not only routine tasks but also more intricate problem-solving and decision-making processes, further enhancing team productivity. Integration With External Services: Future developments may see tighter integration between GPT-powered Slack and external services. This could involve more seamless connectivity with databases, project management tools, and other third-party applications. The ability to pull in real-time data and perform actions across multiple platforms directly within Slack channels will streamline workflows and contribute to a more cohesive and efficient collaborative experience. Customization and Personalization: Advancements in GPT Slack React integration may pave the way for greater customization and personalization options. Users could have more control over how GPT-powered features operate within their workspace, tailoring interactions to suit their specific needs and preferences. This level of customization not only enhances the user experience but also ensures that GPT integration aligns closely with the unique requirements of different teams and organizations. Ethical AI Development and Bias Mitigation: The future of GPT Slack React integration will likely place a strong emphasis on ethical AI development. Efforts to mitigate biases in language models and promote fairness in interactions will be paramount. Ongoing research and development will focus on refining algorithms to ensure that GPT-powered features within Slack are inclusive, unbiased, and respectful of diverse perspectives and user demographics. Collaborative Innovation: The collaborative nature of Slack combined with the intelligence of GPT opens the door to innovative forms of collaboration. Future developments may see the emergence of new features that facilitate collective problem-solving, creative brainstorming sessions, and collaborative content creation. The fusion of human ingenuity and machine intelligence within the Slack environment could lead to entirely new ways of working and ideating. Conclusion In conclusion, GPT integration in Slack, facilitated by React, marks a significant leap in the evolution of collaborative tools. From enhancing communication with intelligent chatbots to automating tasks for increased productivity, the possibilities are vast. As organizations embrace this transformative integration, they pave the way for a future where human-machine collaboration becomes an integral part of everyday work life. The journey toward unlocking the full potential of GPT Slack React integration has just begun, and the road ahead holds promise for even more exciting innovations in the realm of team collaboration.
Uploading massive datasets to Amazon S3 can be daunting, especially when dealing with gigabytes of information. However, a solution is within reach. We can revolutionize this process by harnessing the streaming capabilities of a Node.js TypeScript application. Streaming enables us to transfer substantial data to AWS S3 with remarkable efficiency, all while conserving memory resources and ensuring scalability. In this article, we embark on a journey to unveil the secrets of developing a Node.js TypeScript application that seamlessly uploads gigabytes of data to AWS S3 using the magic of streaming. Setting up the Node.js Application Let's start by setting up a new Node.js project: Shell mkdir aws-s3-upload cd aws-s3-upload npm init -y Next, install the necessary dependencies: Shell npm install aws-sdk axios express multer multer-s3 uuid npm install --save-dev typescript ts-node @types/aws-sdk @types/axios @types/express @types/multer @types/uuid Configuring AWS SDK and Multer In this section, we'll configure the AWS SDK to enable communication with Amazon S3. Ensure you have your AWS credentials ready. JavaScript import express from 'express'; import { S3 } from 'aws-sdk'; import multer from 'multer'; import multerS3 from 'multer-s3'; import { v4 as uuidv4 } from 'uuid'; const app = express(); const port = 3000; const s3 = new S3({ accessKeyId: 'YOUR_AWS_ACCESS_KEY_ID', secretAccessKey: 'YOUR_AWS_SECRET_ACCESS_KEY', region: 'YOUR_AWS_REGION', }); We'll also set up Multer to handle file uploads directly to S3. Define the storage configuration and create an upload middleware instance. JavaScript const upload = multer({ storage: multerS3({ s3, bucket: 'YOUR_S3_BUCKET_NAME', contentType: multerS3.AUTO_CONTENT_TYPE, acl: 'public-read', key: (req, file, cb) => { cb(null, `uploads/${uuidv4()}_${file.originalname}`); }, }), }); Creating the File Upload Endpoint Now, let's create a POST endpoint for handling file uploads: JavaScript app.post('/upload', upload.single('file'), (req, res) => { if (!req.file) { return res.status(400).json({ message: 'No file uploaded' }); } const uploadedFile = req.file; console.log('File uploaded successfully. S3 URL:', uploadedFile.location); res.json({ message: 'File uploaded successfully', url: uploadedFile.location, }); }); Testing the Application To test the application, you can use tools like Postman or cURL. Ensure you set the Content-Type header to multipart/form-data and include a file in the request body with the field name 'file.'
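It is worth noting that multer-s3 streams incoming parts to S3 under the hood, but you can also stream an existing file to S3 directly with the SDK's managed uploader, which keeps memory usage flat for very large files. A minimal sketch, assuming AWS SDK v2 (as installed above) and a made-up bucket name and key scheme: TypeScript
import { S3 } from 'aws-sdk';
import { createReadStream } from 'fs';

const s3 = new S3({ region: 'YOUR_AWS_REGION' });

// upload() performs a managed, multipart upload and accepts a readable
// stream as Body, so the whole file never has to fit in memory.
async function uploadLargeFile(path: string): Promise<string> {
  const result = await s3
    .upload({
      Bucket: 'YOUR_S3_BUCKET_NAME', // placeholder
      Key: `uploads/${Date.now()}_large-file.bin`, // placeholder key scheme
      Body: createReadStream(path),
    })
    .promise();
  return result.Location; // URL of the uploaded object
}

// Example usage with a hypothetical local file.
uploadLargeFile('./big-dataset.csv').then(url =>
  console.log('Uploaded to:', url)
);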
Choosing Between Database Storage and Cloud Storage Whether to store files in a database or an S3 bucket depends on your specific use case and requirements. Here's a brief overview: Database Storage Data Integrity: Ideal for ensuring data integrity and consistency between structured data and associated files, thanks to ACID transactions. Security: Provides fine-grained access control mechanisms, including role-based access control. File Size: Suitable for small to medium-sized files in terms of performance and storage cost. Transactional workflows: Useful for applications with complex transactions involving both structured data and files. Backup and recovery: Facilitates the inclusion of files in database backup and recovery processes. S3 Bucket Storage Scalability: Perfect for large files and efficient file storage, scaling to gigabytes, terabytes, or petabytes of data. Performance: Optimized for fast file storage and retrieval, especially for large media files or binary data. Cost-efficiency: Cost-effective for large volumes of data compared to databases, with competitive pricing. Simplicity: Offers straightforward file management, versioning, and easy sharing via public or signed URLs. Use cases: Commonly used for storing static assets and content delivery, and as a scalable backend for web and mobile file uploads. Durability and availability: Ensures high data durability and availability, suitable for critical data storage. Hybrid Approach: In some cases, metadata and references to files are stored in a database, while the actual files are stored in an S3 bucket, combining the strengths of both approaches. The choice should align with your application's needs, considering factors like file size, volume, performance requirements, data integrity, access control, and budget constraints. Multer vs. Formidable — Choosing the Right File Upload Middleware When building Node.js applications with Express, choosing a suitable file upload middleware is essential. Let's compare two popular options: Multer and Formidable. Multer With Express Express integration: Seamlessly integrates with Express for easy setup and usage. Abstraction layer: Provides a higher-level abstraction for handling file uploads, reducing boilerplate code. Middleware chain: Easily fits into Express middleware chains, enabling selective usage on specific routes or endpoints. File validation: Supports built-in file validation, enhancing security and control over uploaded content. Multiple file uploads: Handles multiple file uploads within a single request efficiently. Documentation and community: Benefits from extensive documentation and an active community. File renaming and storage control: Allows customization of file naming conventions and storage location. Formidable With Express Versatility: Works across various HTTP server environments, not limited to Express, offering flexibility. Streaming: Capable of processing incoming data streams, ideal for handling huge files efficiently. Customization: Provides granular control over the parsing process, supporting custom logic. Minimal dependencies: Keeps your project lightweight with minimal external dependencies. Widely adopted: A well-established library in the Node.js community. Choose between Multer and Formidable based on your project's requirements and library familiarity. Multer is excellent for seamless integration with Express, built-in validation, and a straightforward approach. Formidable is preferred when you need more customization, versatility, or streaming capabilities for large files. Conclusion In conclusion, this article has demonstrated how to develop a Node.js TypeScript application for efficiently uploading large datasets to Amazon S3 using streaming. Streaming is a memory-efficient and scalable approach, especially when dealing with gigabytes of data. By following the steps outlined in this guide, you can enhance your data upload capabilities and build more robust applications.
For the last several days, I’ve been working on one of my pet projects for my portfolio. To be precise, I have been creating an analytical dashboard for an airline company. Finding suitable tools turned out to be a challenge. I wanted to use Next.js in the stack, and my goal was to make sure any user could, firstly, understand the presented statistical data and, secondly, interactively explore the information. So, in this tutorial, I will cut my way through constructing the dashboard and creating a pivot table and charts in a Next.js app, using the example of an airline company. Hopefully, it will save you time :) Prerequisites Here, I would like to share all the things I did to prepare for creating the essential statistics. And we will start with… Analyzing the Subject Area As a base, we will use a freely accessible dataset on landings at a major airport in the United States — San Francisco International Airport. Firstly, we have to analyze the subject area and understand what insights we are looking for. Let us imagine ourselves as airport executives. We have hundreds of arrivals and departures of different types, scales, airlines, etc. What would we likely wish to learn? The efficiency of our establishment, indeed. So, I selected some lead aspects for this field: landing frequency, landing time, aircraft manufacturer models in use, flight geography, and landed weight. Final Dashboard As we get through this tutorial, we will gradually create an interactive dashboard and, more importantly, learn to create it with Next.js. Our finished page will contain a pivot table and plenty of different charts. Tools A large amount of data requires powerful means to display it. So, I surfed the Internet and finally focused on the tools most suitable for my needs. I created a pivot table with Flexmonster and charts with Highcharts. Luckily, these two libraries are super user-friendly, customizable, integrate well with each other, and, more importantly, deploy well on Next.js. By the way, this article on the best tools for reporting in React was really helpful in making my choice. Now, let’s dive into the integration process. Connecting Highcharts and Flexmonster to the Next.js App It’s time to figure out how to launch these cool shapes and tables on the computer. So: 1. Install Flexmonster CLI and the Highcharts packages Shell npm install -g flexmonster-cli npm install highcharts highcharts-react-official 2. Open your Next.js project or create it by running two lines Shell npx create-next-app flexmonster-project --ts --app cd flexmonster-project 3. Get the Flexmonster wrapper for React: Shell flexmonster add react-flexmonster Got it! Now that the libraries are installed, let’s move further and embed them into the project. 4. Import Flexmonster into global.css: CSS @import "flexmonster/flexmonster.css"; 5. Create a separate file, PivotWrapper.tsx, connecting Flexmonster and Highcharts as well. It will be a wrapper for our future pivot table: TypeScript-JSX 'use client' import * as React from 'react'; import * as FlexmonsterReact from "react-flexmonster"; import Flexmonster from 'flexmonster'; import "flexmonster/lib/flexmonster.highcharts.js"; // A custom type so we can pass a reference along with other Flexmonster params type PivotProps = Flexmonster.Params & { pivotRef?: React.ForwardedRef<FlexmonsterReact.Pivot>; } // The pivotRef can be used to get a reference to the Flexmonster instance so you can access the Flexmonster API.
const PivotWrapper: React.FC<PivotProps> = ({ pivotRef, ...params}) => { return ( <FlexmonsterReact.Pivot {...params} ref={pivotRef} /> ) } export default PivotWrapper; 6. Import the wrapper into your page. You can name the page your own way, but I did it in pivot-table-demo/page.tsx: TypeScript-JSX "use client" import * as React from "react"; import type { Pivot } from "react-flexmonster"; import dynamic from "next/dynamic"; import * as Highcharts from 'highcharts'; import HighchartsReact from 'highcharts-react-official'; // The wrapper must be imported dynamically so that Flexmonster is loaded only when the page is rendered on the client side. Learn more about dynamic imports in Next.js. const PivotWrap = dynamic(() => import('@/app/PivotWrapper'), { ssr: false, loading: () => <h1>Loading Flexmonster...</h1> }); const ForwardRefPivot = React.forwardRef<Pivot, Flexmonster.Params>((props, ref?: React.ForwardedRef<Pivot>) => <PivotWrap {...props} pivotRef={ref} /> ) ForwardRefPivot.displayName = 'ForwardRefPivot'; 7. Insert the PivotWrapper component and Highcharts into your page as shown below: TypeScript-JSX export default function WithHighcharts() { const pivotRef: React.RefObject<Pivot> = React.useRef<Pivot>(null); const reportComplete = () => { pivotRef.current!.flexmonster.off("reportComplete", reportComplete); //creating charts after the Flexmonster instance is launched createChart(); } const createChart = () => { // we will define charts here later } return ( <div className="App"> <div id="pivot-container" className=""> <ForwardRefPivot ref={pivotRef} toolbar={true} beforetoolbarcreated={toolbar => { toolbar.showShareReportTab = true; }} shareReportConnection={{ url: "https://olap.flexmonster.com:9500" }} width="100%" height={600} report={{ dataSource: { type: "csv", // path to the dataset of San Francisco Airport landings filename: "https://query.data.world/s/vvjzn4x5anbdunavdn6lpu6tp2sq3m?dws=00000" } }} reportcomplete={reportComplete} // insert your licenseKey below licenseKey="XXXX-XXXX-XXXX-XXXX-XXXX" /> </div> // we will insert charts below later ) } 8. Build and run the app: Shell npm run build npm start You can explore more about the integration of Flexmonster with Next.js and Highcharts in the Flexmonster documentation. The pivot table is ready; the charts are next in line! Setting Up the Chart Configuration In this tutorial, I use pie, bar, column, areaspline, and scatter charts. It sounds like a lot, but Highcharts has even more chart types to offer. It is also possible to customize them to your liking. Below, I will walk through the overall chart-defining process for any chart and emphasize some important points. 1. Insert Highcharts into your page Now, move down to the return() section. Here, you describe your page layout. You will insert Highcharts into <div> blocks. So, create the first <div> block and enter its `id`. For example, my chart tells about landing frequency by aircraft type. So, I name it ‘chart-frequency-aircraft’: TypeScript-JSX <div> <p>By Aircraft Type</p> <div className="chart" id="chart-frequency-aircraft"></div> </div> 2. Define the chart options The chart configuration must be initialized inside the createChart() function. Highcharts is accessed via the Flexmonster PivotWrapper, as it dynamically receives data from the component. You have created the chart container; now describe the diagram’s appearance.
Here are my defined options for the pie chart showing landing frequency by aircraft type: TypeScript-JSX //Running Flexmonster's getData method to provide Highcharts with aggregated data pivotRef.current!.flexmonster.highcharts?.getData( { type: 'pie', // see the list of types in the Highcharts documentation: https://www.highcharts.com/docs/chart-and-series-types/chart-types slice: { rows: [ { // Here you type the name of the row in the dataset uniqueName: 'Landing Aircraft Type', }, ], measures: [ { // Here you type the name of the measure in the dataset uniqueName: 'Landing Count', // You can use aggregation functions as well aggregation: 'count', }, ], }, }, (data: any) => { // Replace ‘chart-frequency-aircraft’ with the necessary <div> id for other charts Highcharts.chart('chart-frequency-aircraft', data); }, (data: any) => { Highcharts.chart('chart-frequency-aircraft', data); } ); If you want to customize your chart, you can do it by accessing the data properties inside the (data: any) => {} callback. For example, I created an inner hole inside the pie chart to make a custom donut chart by adding these lines: TypeScript-JSX (data: any) => { data.plotOptions = { pie: { innerSize: '50%', dataLabels: { enabled: true, format: '<b>{point.name}</b>: {point.percentage:.1f} %', }, }, } Highcharts.chart('chart-frequency-aircraft', data); }, Be careful! You should change data before it is passed to the Highcharts.chart() function as a parameter. Other charts can be created similarly. You can see more examples on the Highcharts demo page. You can also read more about the different properties in its official API documentation. In the following sections, we’ll go deeper into analytics and chart choice and share some insights. Running Through the Dashboard We are already familiar with the subject area, so let’s explore the dashboard more deeply, considering the lead aspects of the airport workflow. Landing Frequency The partnership between airports and airlines is quite important, as it affects the income of both. Understanding the leading airlines can help servicing companies establish business relations. An airport is able to manage landing fees, its own policy, and so on. If you want to explore airport-airline relations more deeply, there is a nice article on the topic. To analyze the flight frequency by airline at the airport, I chose a bar chart, as it can hold a lot of members, and I can conveniently compare the quantitative differences. Besides the apparent domination of United Airlines, we can see that Emirates (Dubai) and Servisair (United Kingdom) are among the leaders in landing count. When analyzing the flight frequency by aircraft type, I preferred the donut chart, since this chart best reflects percentage relationships. Plus, we have only three members, so with the donut chart, it’s much easier to perceive how much passenger flights dominate over freight transportation. Hmm, what about more detailed analytics? Here is a shining moment for Flexmonster Pivot. Let’s quickly set it up and configure the desired report! And now, we are able to see landing activity by airlines chronologically. Let me clarify my purpose: I highlighted the landing count in green when an airline served 1 million flights or more. If it served less than 600k, the cell is yellow, and red if it served less than 300k. The value is acceptable if it ranges between 600k and 1 million. From here, I can notice the periods of greatest and lowest airline activity. I did it using the conditional formatting feature built into the Flexmonster pivot table. Any user can add personal conditions, format the text inside the cells, and set their own data appearance right from the UI; such conditions can also be defined in the report object, as sketched below.
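For completeness, here is a sketch of how such highlighting could be expressed programmatically, using Flexmonster's conditional formatting options in the report object. The formulas and colors below are illustrative assumptions mirroring the thresholds described above, not the exact configuration used in the dashboard. TypeScript
// A fragment of a Flexmonster report with conditional formatting.
const report = {
  dataSource: {
    type: "csv",
    filename: "https://query.data.world/s/vvjzn4x5anbdunavdn6lpu6tp2sq3m?dws=00000",
  },
  conditions: [
    // Cells with 1 million landings or more are highlighted in green.
    { formula: "#value >= 1000000", format: { backgroundColor: "#C6EFCE" } },
    // Cells below 300k are highlighted in red; intermediate bands
    // (e.g., below 600k in yellow) can be added the same way.
    { formula: "#value < 300000", format: { backgroundColor: "#FFC7CE" } },
  ],
};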
Landing Time This analysis discovers relations between the landing count and either the year or the month. First, we see how the landing count changes from year to year. By selecting different types of aircraft bodies on the column chart, we can see the dynamics of landings and evaluate each value by year. Zooming in and out, we can see that passenger transportation is steadily growing, while cargo and combined transport are losing their relevance. How does jet activity change throughout each year? Another way to display value shifts throughout a year is to use an areaspline chart. I can observe how different periods take their share of the total landing count. Obviously, the activity peak is in February and March. Aircraft Manufacturer Models in Use An airport should be familiar with the specifics of the vehicles it works with, as they can need some service while waiting in the terminal. The following chart displays the distribution of aircraft manufacturers in San Francisco airport operations. Since I selected the pie chart, it’s easy to notice Boeing’s and Airbus’s dominance compared to the others. Here are some smaller producers, like Bombardier, Embraer, and McDonnell Douglas. I also analyzed these statistics dynamically in Flexmonster Pivot. For this purpose, I selected some related fields in the pivot table. Here, I can sort alphabetically, A-Z. In addition, I switched to “Charts” mode and smoothly inspected the manufacturer distribution by aircraft body type. Highcharts provides cool interactive charts, while Flexmonster allows you to configure new slices and change visualizations right on the fly. Flight Geography Here, we inspect the geography of flights to spot which flight directions are more relevant for this airport. Over half of the flights connect the airport with Asian countries, and another quarter with European ones. Landed Weight The last aspect is the landed weight. Firstly, we examine the values for each month of the entire observation period. Note that dates are provided in the "YYYYMM" format in the dataset, so they are presented the same way on the horizontal axis in the visualization. I want to see the shape of the value shifts, so I chose a line chart for it. It is more concise than a column chart when we work with a lot of data. So, we see how the landed weight changed from 2005 to 2018 and notice profitable and unfavorable months. Another statistic is quite distinctive. It is the correlation (i.e., the strength of the relationship) between the landed weight and the landing count. This value ranges from -1 to 1; the closer it is to 1, the stronger the dependence of one value on the other. Finally, the last chart tells us the flight efficiency dynamics year by year. Full Demo Link You can check out the full Next.js demo app and explore the dashboard yourself on my GitHub. Final Words In this tutorial, we have created an interactive statistical dashboard, analyzed the subject area, and processed the dataset. We have also noted some specifics of configuring Highcharts and working with the Flexmonster table. Don’t forget to explore the documentation of these components in order to use their full potential. To sum it up, I really enjoyed composing analytics in the field of flights. Its data turns out to represent lots of versatile features. Additionally, I was surprised by the lack of appropriate tutorials on Next.js. So, I wrote this article and hope it will help you share analytical data easily.
I wish you good luck in your future endeavors!