Latest Posts

Featured

A Powerful Docker Container for an Nx Workspace Application

Discover how to easily create a Docker container for an Nx Workspace application with this step-by-step guide to building a site deployable in seconds with Docker

In a previous post, I briefly described the Nx Workspace and how to create Angular applications and libraries with Nrwl Extensions. I wanted the ability to run a prod build of the app in Docker for Windows, so here is just one way of accomplishing that. With the Nx Workspace already set up, I had to add just a few more files. This article assumes an Nx Workspace exists with an app named “client-demo”. It follows a similar approach to creating a static website using Docker and describes how to create a simple Docker container for an Nx Workspace application.

NGINX

Using nginx instead of a Nano Server image due to size (~16 MB compared to 1+ GB), an nginx.conf file is needed. Place the file at the root of the Nx Workspace (the same level as the angular.json file):

# nginx.conf

worker_processes 1;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html index.htm;
    include /etc/nginx/mime.types;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/css application/javascript;

    location / {
      try_files $uri $uri/ /index.html;
    }
  }
}

Dockerfile

It is now time for the Dockerfile. This file acts as a sort of definition file for a Docker Image. Place this file at the same level as the nginx.conf file:

# Dockerfile

FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/apps/client-demo .

Docker Compose

The Dockerfile is created. To use Docker Compose, create a docker-compose.yml file at the same level as the Dockerfile:

# docker-compose.yml

version: '3.1'

services:
  app:
    image: 'client-demo-app'
    build: '.'
    ports:
      - 3000:80

Docker Ignore

When creating a Docker Image not every file is needed. In this case, only the dist/ folder is really needed. Using a .dockerignore file can help keep files and directories out of the build context. Place this file at the same level as the Dockerfile:

# .dockerignore

node_modules
.git
libs
tools
apps

Package.json

To leverage the files that have been created, scripts can be added to the package.json file. This file should already exist within the Nx Workspace. Simply add the following scripts:

// package.json

...
"scripts": {
  ...
  "client-demo-build": "ng build client-demo --prod",
  "client-demo-image": "docker image build -f Dockerfile -t client-demo-app .",
  "client-demo-run": "docker-compose -f docker-compose.yml up",
  "client-demo-stop": "docker-compose -f docker-compose.yml down",
  "client-demo": "yarn client-demo-build && yarn client-demo-image && yarn client-demo-run"
},
...
...

Each of these scripts can be run with npm run <script> or yarn <script>.

client-demo-build: This script runs ng build with the --prod flag to create a prod build of the Angular app.

client-demo-image: This script builds the client-demo-app image using the Dockerfile created earlier, specified explicitly with the -f flag.

client-demo-run: This script uses docker-compose to run the app with docker-compose up. The compose file is specified explicitly with the -f flag.

client-demo-stop: This script acts as the opposite of docker-compose up. As long as this script runs after the client-demo-run script, the app can be started and stopped any number of times.

client-demo: This script simply chains the execution of other scripts to create the prod build of the Angular app, create the Docker image, and serve the app. As it is written, yarn is required.

After creating the Nx Workspace, creating the Docker support files, and adding the scripts to package.json, run npm run client-demo or yarn client-demo and access the app from a browser at http://localhost:3000.

Default Nx Workspace application viewable from a browser

Run npm run client-demo-stop or yarn client-demo-stop to stop the app.

Featured

How to Easily Create a Static Website With Docker

Discover how to easily create a static website with Docker that can be viewed from a browser

The goal of this article is to describe a process for serving static web files from a Docker container. It is surprisingly easy to create a static website with Docker.

The website structure is very simple and consists of only 3 files:

./site/
  style.css
  app.js
  index.html

At the project root there is a Dockerfile:

./
  Dockerfile

The website displays “Loading” text. When the JavaScript file is loaded, “Hello World” is displayed in big red letters:

Hello World “Loading” view

Here is the HTML:

<html>
  <head>
    <title>Sample Website</title>
    <script src="app.js"></script>
    <link href="style.css" rel="stylesheet" />
  </head>
  <body>Loading</body>
</html> 

Here is the Dockerfile:

FROM nanoserver/iis
COPY ./site/ /inetpub/wwwroot/ 

The lines in the Dockerfile are key to getting the webserver image created. This file defines a new Docker image; the image is then used to run a Docker container.

The first line specifies the base image. In this case, it is an image with a configured Nano Server with IIS. There are smaller webserver images that are usually preferable.

The second line will copy the local project files from the ‘site’ folder to the wwwroot folder of the nanoserver image.

That is everything needed to get a web server started to serve the web page. To create the image, start with docker build:

> docker build -t webserver-image:v1 .

The docker build command is used to create an image. When it is executed from a command line within the directory of a Dockerfile, the file will be used to create the image. The -t option lets us name and optionally tag the image. In this case, the name is “webserver-image” with the “v1” tag. Tags are generally used to version images. The last argument is the path used to build the image. In this case, it is ., the current directory.

Running the command will build the image:

> docker build -t webserver-image:v1 .
Sending build context to Docker daemon 26.11kB
Step 1/2 : FROM nanoserver/iis
---> 7eac2eab1a5c
Step 2/2 : COPY ./site/ /inetpub/wwwroot/
---> fca4962e8674
Successfully built fca4962e8674
Successfully tagged webserver-image:v1

The build succeeded. This can be verified by running docker image ls:

> docker image ls
REPOSITORY      TAG IMAGE ID     CREATED       SIZE
webserver-image v1  ffd9f77d44b7 3 seconds ago 1.29GB

If the build doesn’t succeed, there may be a few things to double-check. This includes making sure the Dockerfile is available, nanoserver images can be pulled, and paths are accurate.

Now that an image is created, it can be used to create a container. This can be done with the docker run command:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1

After running the command, the container id will be displayed:

> docker run --name web-dev -d -it -p 80:80 webserver-image:v1
fde46cdc36fabba3aef8cb3b91856dbd554ff22d63748d486b8eed68a9a3b370

A docker container was created successfully. This can be verified by executing docker container ls:

> docker container ls
CONTAINER ID IMAGE              COMMAND                  CREATED        STATUS        PORTS              NAMES
fde46cdc36fa webserver-image:v1 "c:\\windows\\system32…" 31 seconds ago Up 25 seconds 0.0.0.0:80->80/tcp web-dev

The container id is displayed (a shorter version of what was shown when executing docker run). The image that was used for the container is also displayed along with when it was created, the status, port, and the container name.

The following docker inspect command will display the IP address:

> docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" web-dev
172.19.112.171

This IP address is what can be called in a browser to view the page:

Hello World “Loading” view

There is now a working container that serves the web page!

I learn by doing and found that most of us in tech do. That is why I got Manning Publications’ Docker in Action to learn Docker using their step-by-step instructions and immediately actionable information to apply to enterprise-level projects.

Their “In Action” series takes the reader on an active journey by way of doing. After learning the details of using Docker to release enterprise-level software I wanted to be sure I understood the concepts and practices behind the delivery. Manning Publications has another book called Docker in Practice. Their “In Practice” series dives deep into the concepts presented by the technology. Together, Docker in Action and Docker in Practice create a well-rounded course in leveraging Docker effectively.

Python File Organization: A Comprehensive Guide

Learn how to efficiently sort, filter, and move files using Python, streamlining your workflow and boosting productivity.

In this post, we will explore the powerful capabilities of Python for efficiently organizing files. Proper file organization is crucial for maintaining a structured and accessible file system. Python provides a wide range of tools and techniques to help us achieve this. By leveraging Python’s functionality, we can easily sort, filter, and move files based on specific criteria such as file type, size, and date of creation. This article aims to guide you through the process of organizing your files using Python, empowering you to enhance your productivity and streamline your file management workflow.

Understanding File Properties

To effectively organize files, it is essential to understand their properties, such as file type, size, and date of creation. These properties provide valuable information for sorting and filtering files. In Python, we can utilize the versatile os module to access and manipulate these properties programmatically. Let’s take a look at an example that demonstrates how to retrieve the file properties of a given file using the os.path module:

import os

file_path = 'path/to/file.txt'

# Get file type
file_type = os.path.splitext(file_path)[1]

# Get file size in bytes
file_size = os.path.getsize(file_path)

# Get date of creation
creation_date = os.path.getctime(file_path)

print(f"File Type: {file_type}")
print(f"File Size: {file_size} bytes")
print(f"Creation Date: {creation_date}")

In the above example, we use the os.path.splitext() function to extract the file extension, the os.path.getsize() function to get the file size in bytes, and the os.path.getctime() function to obtain the creation timestamp of the file (note that on Windows this is the creation time, while on most Unix systems it is the time of the last metadata change). Understanding these file properties will be crucial as we proceed through the article.
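
The timestamp that os.path.getctime() returns is a raw float (seconds since the epoch), which is not very readable on its own. A small sketch, using a throwaway temporary file so it is self-contained, shows how datetime.fromtimestamp() turns it into a human-friendly date:

```python
import os
import tempfile
from datetime import datetime

# create a throwaway file so the example is self-contained
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as tmp:
    file_path = tmp.name

# os.path.getctime() returns a float timestamp (seconds since the epoch);
# datetime.fromtimestamp() converts it into a datetime object
timestamp = os.path.getctime(file_path)
readable = datetime.fromtimestamp(timestamp)

print(f"Creation Date: {readable:%Y-%m-%d %H:%M:%S}")

os.remove(file_path)  # clean up the throwaway file
```

The same conversion works for os.path.getmtime() (last modification) and os.path.getatime() (last access).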

Sorting Files

Sorting files is a fundamental aspect of file organization. Python offers various techniques to sort files based on different properties. Let’s explore how we can sort files based on file type, size, and date of creation using Python.

Sorting by File Type

To sort files by their types, we can use the sorted() function along with the key parameter and a lambda function. Here’s an example that demonstrates how to sort files in a directory based on their file types:

import os

directory = 'path/to/files/'

files = os.listdir(directory)
sorted_files = sorted(files, key=lambda x: os.path.splitext(x)[1])

for file in sorted_files:
    print(file)

In this example, the os.listdir() function retrieves a list of files in the specified directory. The sorted() function sorts the files based on their file extensions, extracted using the os.path.splitext() function. Finally, we iterate through the sorted files and print their names.

Sorting by File Size

To sort files based on their sizes, we can utilize the sorted() function with the key parameter and the os.path.getsize() function. Here’s an example that demonstrates sorting files by size in descending order:

import os

directory = 'path/to/files/'

files = os.listdir(directory)
sorted_files = sorted(files, key=lambda x:
   os.path.getsize(os.path.join(directory, x)), reverse=True)

for file in sorted_files:
    print(file)

In this example, the os.path.getsize() function is used within the lambda function to retrieve the size of each file. The reverse=True argument ensures that the files are sorted in descending order of size.

Sorting by Date of Creation

To sort files based on their dates of creation, we can again use the sorted() function with the key parameter and the os.path.getctime() function. Here’s an example that demonstrates sorting files by creation date in ascending order:

import os

directory = 'path/to/files/'

files = os.listdir(directory)
sorted_files = sorted(files, key=lambda x: 
   os.path.getctime(os.path.join(directory, x)))

for file in sorted_files:
    print(file)

In this example, the os.path.getctime() function retrieves the creation time of each file. The files are sorted in ascending order by creation date.

Filtering Files

Filtering files allows us to extract specific subsets of files based on defined criteria. Python’s list comprehension feature offers a concise and powerful way to filter files efficiently. Let’s explore how we can filter files based on specific attributes using Python.

Filtering by File Extension

To filter files based on their extensions, we can use a list comprehension. Here’s an example that demonstrates how to filter files in a directory to only include text files:

import os

directory = 'path/to/files/'

files = os.listdir(directory)
text_files = [file for file in files if file.endswith('.txt')]

for file in text_files:
    print(file)

In this example, the list comprehension filters the files by checking if each file name ends with the ‘.txt’ extension.
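
Since str.endswith() also accepts a tuple, a single comprehension can match several extensions at once. Here is a sketch that builds a throwaway directory (so the example is self-contained) and keeps only the document-like files:

```python
import os
import tempfile

# build a throwaway directory with a few sample files
directory = tempfile.mkdtemp()
for name in ('notes.txt', 'report.md', 'photo.jpg', 'readme.rst'):
    open(os.path.join(directory, name), 'w').close()

files = os.listdir(directory)
# endswith() accepts a tuple, so one comprehension matches several extensions
doc_files = [file for file in files if file.endswith(('.txt', '.md', '.rst'))]

for file in sorted(doc_files):
    print(file)
```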

Filtering by File Size

To filter files based on their sizes, we can combine list comprehensions with the os.path.getsize() function. Here’s an example that filters files to only include those larger than a specified size:

import os

directory = 'path/to/files/'
min_size = 1024  # Minimum file size in bytes

files = os.listdir(directory)
filtered_files = [file for file in files if 
   os.path.getsize(os.path.join(directory, file)) > min_size]

for file in filtered_files:
    print(file)

In this example, the list comprehension filters the files by comparing their sizes with the min_size variable.
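
The same pattern extends to dates: os.path.getmtime() gives each file's last-modified timestamp, so a comprehension can keep only recently changed files. A sketch, assuming a cutoff of seven days and using a temporary directory as a stand-in:

```python
import os
import tempfile
import time

# stand-in directory with one freshly created file
directory = tempfile.mkdtemp()
open(os.path.join(directory, 'fresh.txt'), 'w').close()

max_age = 7 * 24 * 60 * 60  # seven days, in seconds
cutoff = time.time() - max_age

files = os.listdir(directory)
# keep files whose last-modified time is newer than the cutoff
recent_files = [file for file in files if
    os.path.getmtime(os.path.join(directory, file)) > cutoff]

for file in recent_files:
    print(file)
```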

Moving Files

Once files are sorted and filtered, it’s often necessary to relocate them to different directories for improved organization. Python’s shutil module provides us with convenient functions for moving files. Let’s see how we can move files from one directory to another using Python.

import os
import shutil

source_directory = 'path/to/source/'
destination_directory = 'path/to/destination/'

files = os.listdir(source_directory)

for file in files:
    source_path = os.path.join(source_directory, file)
    destination_path = os.path.join(destination_directory, file)
    shutil.move(source_path, destination_path)

In this example, we iterate through the files in the source directory, obtain the source and destination paths for each file, and use the shutil.move() function to move the files to the specified destination directory.
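
One caveat worth knowing: shutil.move() fails if the destination directory does not exist yet. A defensive script creates it first with os.makedirs(). A minimal sketch, using temporary directories as stand-ins for the real paths:

```python
import os
import shutil
import tempfile

# stand-in directories so the sketch is self-contained
source_directory = tempfile.mkdtemp()
destination_directory = os.path.join(tempfile.mkdtemp(), 'archive')

open(os.path.join(source_directory, 'data.txt'), 'w').close()

# create the destination first; shutil.move() fails if it is missing
os.makedirs(destination_directory, exist_ok=True)

for file in os.listdir(source_directory):
    shutil.move(os.path.join(source_directory, file),
                os.path.join(destination_directory, file))

print(os.listdir(destination_directory))
```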

Practical Example: Automating File Organization

Now let’s walk through a practical example that demonstrates how to automate file organization based on specific criteria. For instance, we can create a script that separates images, documents, and music files into dedicated folders.

  1. Create a new file called ‘organize.py’ in a folder of your choice.
  2. Place the following code in the file and update the directory paths to fit your needs:
import os
import shutil

# directory paths
source_directory = 'path/to/files/'
image_directory = 'path/to/images/'
document_directory = 'path/to/documents/'
music_directory = 'path/to/music/'

# make sure the destination directories exist before moving anything
for directory in (image_directory, document_directory, music_directory):
    os.makedirs(directory, exist_ok=True)

# gather the list of files from the source directory
files = os.listdir(source_directory)

# iterate through the files in the source directory
for file in files:
    # move images to the image directory
    if file.endswith(('.jpg', '.png', '.gif')):
        shutil.move(os.path.join(source_directory, file), image_directory)

    # move documents to the document directory
    elif file.endswith(('.pdf', '.doc', '.txt')):
        shutil.move(os.path.join(source_directory, file), document_directory)

    # move music to the music directory
    elif file.endswith(('.mp3', '.wav', '.flac')):
        shutil.move(os.path.join(source_directory, file), music_directory)
  3. Save the file.
  4. Run the file using the command: python.exe organize.py
  5. Optionally, use Windows Task Scheduler to run this script on a set schedule automatically.

Conclusion

Python offers a comprehensive set of tools and techniques for organizing files effectively. By leveraging Python’s capabilities, we can sort, filter, and move files based on various criteria, bringing order to our file systems. The skills gained through file organization using Python are not only valuable in improving productivity and maintaining a structured workflow but also extend to other Python tasks and automation scenarios. Embracing efficient file organization practices will undoubtedly enhance your programming experience and enable you to maximize your efficiency when dealing with large volumes of files.

We encourage readers to dive into the provided examples and embark on their own file organization journey using Python. Share your experiences and challenges in the comments, and let us know how organizing files with Python has benefited your workflow. We also invite you to suggest any specific topics or questions you would like to see covered in future articles. Our aim is to create an interactive and engaging community where we can learn and grow together. So, don’t hesitate to join the conversation and contribute your thoughts and ideas. Together, we can harness the power of Python for efficient file organization and beyond.

File System Automation: File Operations in Python

Learn how to automate file operations in Python and boost your efficiency. Discover the techniques, code examples, and best practices for file system automation.

Welcome to the world of file system automation with Python! In this guide, we’ll look at file manipulation. Whether you’re a software architect, engineer, or consultant, mastering file system automation can significantly enhance your career by boosting efficiency, reducing errors, and increasing productivity. Let’s dive in and uncover the power of Python in file operations.

Find this post and more in my Substack newsletter CodeCraft Dispatch.

Opening and Closing Files in Python

It is crucial when working with files to understand how to open and close them. In Python, we use the open() function to open a file in different modes, such as reading, writing, or appending. Let’s explore these modes:

  • Reading a file: We can use the open() function with the ‘r’ mode to read the content of a file. Here’s an example:
with open('data.txt', 'r') as file:
    content = file.read()
    print(content)
  • Writing to a file: To write to a file, we use the ‘w’ mode in the open() function. This allows us to create a new file or overwrite the existing content. Consider the following code snippet:
with open('output.txt', 'w') as file:
    file.write("Hello, world!")

It’s important to close files after we’re done with them. Python provides a convenient way to ensure automatic file closure using the with statement. This guarantees that the file is properly closed, even if an exception occurs during the file operations.
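
For readers who want to see what the with statement is doing under the hood, it is essentially shorthand for a try/finally block that always calls close(). A small self-contained sketch (writing to a temporary file) shows both forms side by side:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'data.txt')

# what the with statement does for us, spelled out with try/finally
file = open(path, 'w')
try:
    file.write("Hello, world!")
finally:
    file.close()  # runs even if write() raises an exception

# the equivalent, and preferred, form
with open(path, 'r') as file:
    content = file.read()

print(content)
```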

Reading from Files

Python offers various methods to read from files based on our requirements. Let’s explore a few common techniques:

Reading an entire file at once: We can use the read() method to read the entire content of a file as a single string. Here’s an example:

with open('data.txt', 'r') as file:
    content = file.read()
    print(content)

Reading line by line: If we want to process a file line by line, we can use the readline() method or the readlines() method to read all the lines into a list.

  • Here’s an example of using readline():
with open('data.txt', 'r') as file:
    line = file.readline()
    print(line)
  • Here’s an example of using readlines():
with open('data.txt', 'r') as file:
    lines = file.readlines()
    for line in lines:
        print(line)
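
Note that readlines() loads every line into memory at once. File objects are themselves iterable, so for large files it is usually better to loop over the file directly, which reads one line at a time. A self-contained sketch using a temporary file:

```python
import os
import tempfile

# create a small sample file so the example is self-contained
path = os.path.join(tempfile.mkdtemp(), 'data.txt')
with open(path, 'w') as file:
    file.write("first\nsecond\n")

lines_seen = []
# the file object yields one line at a time, so the whole
# file never has to fit in memory at once
with open(path, 'r') as file:
    for line in file:
        lines_seen.append(line.rstrip('\n'))
        print(line, end='')
```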

Writing to Files

Writing to files allows us to store data or generate output. Let’s explore the different techniques:

  • Writing to a file: We use the write() method to write content to a file. Here’s an example:
with open('output.txt', 'w') as file:
    file.write("Hello, world!")
  • Appending to a file: To append content to an existing file, we can open it in append mode ‘a’ and use the write() method. Consider the following code snippet:
with open('output.txt', 'a') as file:
    file.write(" Appending new content.")

Working with Binary Files

Python also allows us to work with binary files. Binary files contain non-textual data, such as images, audio, and video. Let’s explore some ways you can handle binary files in Python:

  • Reading from a binary file: We open a binary file in read mode ‘rb’ and use the read() method to process the data. Here’s an example:
with open('image.jpg', 'rb') as file:
    data = file.read()
    # process the binary data

  • Writing to a binary file: To write binary data to a file, we open it in write mode ‘wb’ and use the write() method. Consider the following code snippet:
with open('copy.jpg', 'wb') as file:
    file.write(data)  # the binary data read in the previous example

Error Handling

File operations can encounter errors, such as file not found, permission denied, or disk full. It’s essential to handle these errors gracefully. Python offers the try-except block to catch and handle exceptions. Let’s see how it works:

try:
    with open('data.txt', 'r') as file:
        content = file.read()
        # Perform operations on the content
except FileNotFoundError:
    print("The file does not exist.")
except PermissionError:
    print("You don't have permission to access the file.")
except Exception as e:
    print(f"An error occurred: {str(e)}")

Practical Examples: Combining Reading and Writing Operations

Let’s illustrate file system automation with a practical example of reading and writing files. Consider the scenario where we need to process customer data from a CSV file and generate a summary report. We can achieve this by utilizing the concepts we’ve covered so far, such as reading, processing, and writing data. The following code snippet illustrates the approach:

import csv

with open('customers.csv', 'r', newline='') as read_file, open('report.txt', 'w') as write_file:
    reader = csv.reader(read_file)      # read customer data from the CSV file
    next(reader)                        # skip the header row
    total = sum(1 for _ in reader)      # process the customer rows
    write_file.write(f"Total customers: {total}\n")  # write the report

Conclusion

In this article, we’ve explored the ins and outs of file system manipulation in Python. Python provides a wide range of tools for handling files. Software professionals can unlock new levels of efficiency with these techniques.

Get Involved!

Now, it’s time for you to take action! Try out the code examples, experiment with different file operations, and share your experiences with us. We’d love to hear your thoughts and answer any questions you may have. Stay tuned for more articles on automation and other exciting topics. Happy coding!

Unlock Powerful File System Automation with these 12 Methods

Learn how to unlock the power of file system automation in Python using the ‘os’ and ‘shutil’ modules. Examine 12 practical Python methods for creating, reading, writing, deleting files, and managing directories.

Welcome to our comprehensive guide on file system automation in Python. Python gives us powerful tools for effective file and directory management. In this article, we will explore 12 methods for automating file system tasks using Python. These methods can help automate tasks, improve automated workflows, and optimize file system operations.

Find this post and exclusive content in my Substack newsletter CodeCraft Dispatch.

Introduction to Python’s os and shutil Modules

To unlock the full potential of file system automation, we’ll leverage Python’s built-in ‘os’ and ‘shutil’ modules. These modules provide a variety of functions and methods for interacting with files, directories, and the OS. Let’s dive in.

Method 1: Creating New Files

Creating a new file is a common file system operation. With Python’s ‘open()’ function, we can easily create a new file and specify the desired file mode. Here’s an example:

file_path = "path/to/new_file.txt"
file = open(file_path, "w")
file.close()

Method 2: Reading File Content

Python’s ‘open()’ function, combined with methods like ‘read()’, ‘readline()’, or ‘readlines()’, allows us to access the data within a file. Here’s an example that reads a file one line at a time:

file_path = "path/to/file.txt"
with open(file_path, "r") as file:
    for line in file:
        print(line)

Method 3: Writing to Files

Writing data to a file is essential for storing output. Using the ‘open()’ function with the write mode (‘w’ or ‘a’), we can write content to a file. If the file already exists, opening it in write mode will overwrite its contents. Here is an example:

file_path = "path/to/file.txt"
with open(file_path, "w") as file:
    file.write("Hello, World!")

Method 4: Deleting Files

Python’s ‘os’ module provides the ‘remove()’ function to delete a file. Here’s how you can use it:

import os

file_path = "path/to/file.txt"
os.remove(file_path)

Method 5: Creating Directories

Creating directories is an essential element of maintaining an organized file system. Python’s ‘os’ module offers the ‘mkdir()’ function to create directories. Let’s see an example:

import os

directory_path = "path/to/new_directory"
os.mkdir(directory_path)
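
One thing to keep in mind: os.mkdir() only creates the final directory and raises FileNotFoundError if any parent is missing. For nested paths, os.makedirs() builds the whole chain, and exist_ok=True makes repeated calls safe. A self-contained sketch using a temporary base directory:

```python
import os
import tempfile

base = tempfile.mkdtemp()
nested_path = os.path.join(base, 'projects', '2024', 'reports')

# os.mkdir(nested_path) would fail here because 'projects/2024' is missing;
# os.makedirs() creates every intermediate directory in one call
os.makedirs(nested_path, exist_ok=True)

# calling it again is safe thanks to exist_ok=True
os.makedirs(nested_path, exist_ok=True)

print(os.path.isdir(nested_path))
```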

Method 6: Listing Directory Contents

To obtain a list of files and directories within a directory, we can use the ‘os.listdir()’ function. Here’s some code that shows this:

import os

directory_path = "path/to/directory"
contents = os.listdir(directory_path)
for item in contents:
    print(item)

Method 7: Renaming Directories

Sometimes, we may need to rename directories to support consistency or reflect updated information. Python’s ‘os’ module provides the ‘rename()’ function for this purpose. Here is an example:

import os

old_directory_path = "path/to/old_directory"
new_directory_path = "path/to/new_directory"
os.rename(old_directory_path, new_directory_path)

Method 8: Deleting Directories

To remove an entire directory, Python’s ‘os’ module offers the ‘rmdir()’ function. It’s important to note that the directory must be empty for this function to work. Here’s an example:

import os

directory_path = "path/to/directory"
os.rmdir(directory_path)
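
When the directory still has contents, os.rmdir() raises OSError. For that case, shutil.rmtree() removes the directory and everything inside it; use it with care, since there is no undo. A self-contained sketch using a temporary directory:

```python
import os
import shutil
import tempfile

# build a non-empty throwaway directory
directory = tempfile.mkdtemp()
open(os.path.join(directory, 'leftover.txt'), 'w').close()

# os.rmdir(directory) would raise OSError because the directory is
# not empty; shutil.rmtree() deletes it and all of its contents
shutil.rmtree(directory)

print(os.path.exists(directory))
```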

Method 9: Moving Files

Moving files from one location to another is a common file system operation. With the ‘shutil’ module, we can easily accomplish this using the ‘move()’ function. Here’s an example:

import shutil

source_file = "path/to/source.txt"
destination_file = "path/to/destination.txt"
shutil.move(source_file, destination_file)

Method 10: Copying Files

Creating copies of files is another essential file system operation. The ‘shutil’ module provides the ‘copy()’ function which allows us to create duplicates of files. Consider the following example that shows the code to copy from one file to another:

import shutil

source_file = "path/to/source.txt"
destination_file = "path/to/destination.txt"
shutil.copy(source_file, destination_file)

Method 11: Copying Directories

Python’s ‘shutil’ module also provides the ‘copytree()’ function, allowing us to create copies of entire directories. This function recursively copies all files and subdirectories within the specified directory. Here’s an example:

import shutil

source_directory = "path/to/source_directory"
destination_directory = "path/to/destination_directory"
shutil.copytree(source_directory, destination_directory)
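
By default, copytree() insists that the destination directory does not exist yet. Since Python 3.8, passing dirs_exist_ok=True allows copying into an existing directory. A self-contained sketch using temporary directories:

```python
import os
import shutil
import tempfile

source_directory = tempfile.mkdtemp()
destination_directory = tempfile.mkdtemp()  # already exists on disk

open(os.path.join(source_directory, 'settings.ini'), 'w').close()

# copytree() normally raises FileExistsError for an existing destination;
# dirs_exist_ok=True (Python 3.8+) copies into it instead
shutil.copytree(source_directory, destination_directory, dirs_exist_ok=True)

print(os.listdir(destination_directory))
```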

Method 12: Error Handling in File and Directory Operations

When working with files and directories, it’s crucial to handle potential errors gracefully. Common errors include FileNotFoundError, PermissionError, and OSError. Python supplies error handling mechanisms, such as try-except blocks, to catch and handle these errors. Here’s an example:

file_path = "path/to/file.txt"
try:
    with open(file_path, "r") as file:
        content = file.read()  # perform operations on the file
except FileNotFoundError:
    print("The file does not exist.")
except PermissionError:
    print("Permission denied.")
except Exception as e:
    print(f"An error occurred: {e}")

Practical Examples of File and Directory Operations

Let’s explore a couple of practical applications of file and directory operations.

Automated File Backup: Using Python, you can create a script that regularly backs up important files and directories.
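
The backup idea can be sketched in a few lines with copytree(): each run copies the source into a fresh timestamped folder, so older backups are preserved. The directories below are temporary stand-ins; a real script would point at your important-files and backup locations:

```python
import os
import shutil
import tempfile
from datetime import datetime

# stand-in directories so the sketch is self-contained; in a real
# script these would be your important-files and backup locations
source_directory = tempfile.mkdtemp()
backup_root = tempfile.mkdtemp()

open(os.path.join(source_directory, 'important.txt'), 'w').close()

# each run writes into a fresh timestamped folder, keeping history
stamp = datetime.now().strftime('%Y%m%d-%H%M%S')
backup_directory = os.path.join(backup_root, f'backup-{stamp}')
shutil.copytree(source_directory, backup_directory)

print(os.listdir(backup_directory))
```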

File Sorting: Suppose you have a directory with various files of diverse types. You can use Python to move these files into separate directories based on their file extensions automatically.

These examples suggest just a few ways Python’s file system operations can automate tasks and boost productivity.

Conclusion

In this comprehensive technical article, we discussed:

  • The basics of file and directory operations in Python using the ‘os’ and ‘shutil’ modules.
  • How to create, read, write, and delete files, as well as create, list, rename, and delete directories.
  • How to move and copy files and directories using the ‘shutil’ module.

Master these concepts for a solid foundation in file system automation using Python.

Try out new things and read the official documentation to deepen your understanding of Python file system automation. Harness the power of Python to automate tasks, organize data, and streamline your workflows.

Get Involved!

We value your feedback and look forward to hearing about your experiences with file system operations in Python. If you have questions, insights, or challenges related to the topics discussed in this article, please share them in the comments section below. Your contributions can help create a supportive community. Don’t forget to share this article with others who might find it helpful.

File System Automation: How to Boost Efficiency with Python

Explore how file system automation using Python can boost your efficiency. Discover Python’s automation capabilities and practical use cases.

Introduction

In this post, we’ll explore file system automation using Python: why it’s important to software development and how it can help developers.

What is File System Automation?

File system automation is when software performs tasks related to files and directories automatically. It reduces manual intervention and increases efficiency. For example, it can clean up and organize files, automate testing and deployment, and process data efficiently.

Why Python for File System Automation?

Python is a powerful and flexible programming language, widely used for automation. It supplies libraries and modules that make file and directory operations easier. It’s great for automating repetitive tasks, is easy to understand, and has a wealth of learning resources.

Capabilities for File System Automation

Python provides built-in modules like os and shutil for working with files and directories. These modules allow for file creation, reading, writing, and deletion. For example, Python’s open() function creates a new file, read() reads its contents, and os.path.exists() checks if a file or directory exists.
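
The basics mentioned above fit in a few lines. This sketch (using a temporary directory as a stand-in path) creates a file with open(), reads it back with read(), and confirms it exists with os.path.exists():

```python
import os
import tempfile

# stand-in path so the sketch is self-contained
path = os.path.join(tempfile.mkdtemp(), 'notes.txt')

# open() in 'w' mode creates the file, read() fetches its contents,
# and os.path.exists() confirms it is on disk
with open(path, 'w') as file:
    file.write("automated")

with open(path, 'r') as file:
    contents = file.read()

print(os.path.exists(path), contents)
```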

Practical Uses of File System Automation with Python

  • It can create regular backups of directories for data safety.
  • Rename files with a consistent naming convention for improved search and organization.
  • Compress files into a single archive for efficient sharing and storage.

These are just a few ways Python can solve real-world problems.
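The compression use case, for instance, is a one-liner with shutil.make_archive. The folder and archive names below are hypothetical, chosen only to make the sketch self-contained:

```python
import shutil
from pathlib import Path

# Hypothetical folder to back up
src = Path("project_docs")
src.mkdir(exist_ok=True)
(src / "readme.txt").write_text("notes")

# Compress the whole folder into a single zip archive;
# make_archive returns the path to the created archive
archive_path = shutil.make_archive("project_docs_backup", "zip", root_dir=src)
```

The same call accepts other formats such as "tar" and "gztar", which makes it easy to match whatever your backup tooling expects.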

What to Expect in This Series

We’ll dive deeper into Python’s capabilities for working with files and directories. Here’s an outline of the topics we’ll cover:

  1. Working with Files and Directories
  2. Read and Write Files in Python
  3. Organize Your File System with Python
  4. Advanced File System Operations
  5. Automate Routine File System Tasks with Python Scripts
  6. Schedule Your Python Scripts

Each post builds on the foundations laid in this article. They supply practical knowledge and empower you to automate file and directory tasks effectively.

Conclusion

File system automation with Python enhances productivity for developers. Python’s capabilities automate repetitive tasks, enable effortless data manipulation, and streamline software development workflows. To learn more about file and directory automation, explore Python’s documentation; modules like os and shutil are part of the standard library, so there is nothing extra to install with pip. Engage with the Python community, and stay tuned for upcoming posts where we’ll dive into the intricacies of file and directory automation with Python.

Get Involved!

We want to hear about your experiences with Python automation. Share your thoughts in the comments and let us know how file and directory automation has affected your work. If you found this article helpful, please consider sharing it.

Unlock the Power of Python: Download Files Easily

Dive into the world of Python as we explore a simple but incredibly useful task: downloading files from the internet. Whether you’re a beginner or an experienced developer, learn how to boost your skills with our step-by-step guide.

Hello, folks! Today we’re diving into an exciting topic that’ll boost your Python skills, no matter if you’re just starting or have years of experience under your belt. We’ll explore how to download files from the internet using Python, a simple but incredibly useful task. This isn’t just another dry tutorial, but a journey into the world of Python, perfect for anyone with an appetite for learning and a zest for coding.

Python: Your Swiss Army Knife for Web Data

Python has steadily grown in popularity over the years, and for good reason. It’s versatile, powerful, and, best of all, easy to learn. One of its many applications is web data extraction, which can be anything from scraping text data from websites to downloading files hosted online.

Today, we’re focusing on the latter. So, sit tight and get ready to add another tool to your Python arsenal.

The Task at Hand: Downloading a SEC Edgar Company Fact Data File

We have a specific file we’re interested in: the SEC Edgar Company Fact data zip file, located on the SEC’s site. Our challenge is to download this file using Python, but with a twist – we need to include a specific header in our request so the SEC data wizards won’t block our request. This header will be in the format of ‘User-Agent’: {first_name} {last_name} {email_address}. So, let’s roll up our sleeves and get coding.

Starting with the Basics: Importing the requests Library

The first step in our Python script is to import the requests library.

import requests

requests is a popular Python library used for making HTTP requests. It abstracts the complexities of making requests behind a beautiful, simple API, allowing you to send HTTP/1.1 requests with ease. There’s no need to manually add query strings to your URLs or form-encode your POST data.

Defining Our Target: The URL and Headers

Next, we need to define the URL of the file we want to download and the headers we will include in our request. In our case, the URL is a direct link to the zip file we’re after.

# Define the URL of the file you want to download
url = "https://www.sec.gov/Archives/edgar/daily-index/xbrl/companyfacts.zip"

Headers let the server know more about the client making the request. Here, we’re adding a ‘User-Agent’ header, which typically includes details like the application type, operating system, software version, and software vendor.

# Define your headers
headers = {
    'User-Agent': 'YourFirstName YourLastName YourEmailAddress@example.com'
}

Just replace ‘YourFirstName’, ‘YourLastName’, and ‘YourEmailAddress@example.com’ with your actual first name, last name, and email address.

Making the Request: The GET Method

Now comes the exciting part: sending our GET request to the URL.

# Send a GET request to the URL
response = requests.get(url, headers=headers)

In HTTP, a GET request is used to request data from a specified resource. With requests.get(), we’re sending a GET request to the URL we specified earlier, with the headers we defined.

Handling the Response: Checking the Status and Writing the File

After making our request, we need to handle the response and ensure the request was successful. This is where the HTTP response status code comes into play.

HTTP response status codes indicate whether a specific HTTP request has been successfully completed. A status code of 200 means that the request was successful, and the requested resource will be sent back to the client.

Once we’ve confirmed the request was successful, we can go ahead and write the content of the response (our file) to a local file.

# Make sure the request was successful
if response.status_code == 200:

    # Open the file in binary mode and write the response content to it
    with open('companyfacts.zip', 'wb') as file:
        file.write(response.content)
else:
    print(f"Failed to download file, status code: {response.status_code}")

Here, we’re using Python’s built-in open() function to open a file in binary mode. We’re then writing the content of the response to this file. If there was an issue with the request (indicated by a status code other than 200), we print an error message.

And voilà! You’ve just downloaded a file from the web using Python. This approach isn’t just limited to our SEC Edgar Company Fact data file – you can apply the same method to download any file from the internet using Python.
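One caveat worth knowing: response.content holds the entire download in memory, and companyfacts.zip is a large file. The requests library also supports streaming, which writes the file to disk in chunks. Here is a sketch of that variant wrapped in a reusable function (the function name and chunk size are my own choices, not part of the original script):

```python
import requests

def download_file(url, dest, headers, chunk_size=8192):
    """Stream a download to disk so the whole file never sits in memory."""
    with requests.get(url, headers=headers, stream=True) as response:
        # raise_for_status() raises an exception on 4xx/5xx codes,
        # an alternative to checking status_code == 200 by hand
        response.raise_for_status()
        with open(dest, "wb") as file:
            for chunk in response.iter_content(chunk_size=chunk_size):
                if chunk:  # skip keep-alive chunks
                    file.write(chunk)
```

For small files the simple response.content approach in the walkthrough above is perfectly fine; streaming only starts to matter as downloads grow past what you want to hold in RAM.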

A Word of Caution

Before we wrap up, it’s important to note that you should always ensure you have the rights to download and use the data you’re accessing. Always comply with the terms of service associated with the data source. Responsible and ethical data usage is key in any data-related task.

Wrapping Up

Today we’ve unlocked a powerful tool in Python’s arsenal: downloading files from the web. We’ve not only walked through the code but also explored the why behind it, providing you with a deeper understanding of the task at hand.

Whether you’re a Python newbie or an experienced developer, we hope you found value in this post. Python’s simplicity and power make it a go-to language for a wide range of tasks, and we’re excited to see what you’ll do with it next.

Stay tuned for more Python adventures. And as always, happy coding!


Revolutionize Your Code: Python’s Magic With ConfigParser

Explore how to revolutionize your Python code with the magic of ConfigParser. This detailed guide will walk you through managing app settings with ease.

If you’ve been programming with Python, you’ve likely run into scenarios where you need to manage application settings. Perhaps you’re juggling a slew of URLs, and you’d like a more elegant solution than hard-coding these in your script. Or maybe you’re dealing with sensitive information that you can’t afford to expose. This is where ConfigParser comes in – a handy Python module that provides a structured way to manage application settings. And today, we’ll walk you through how to leverage it.

A Brief Background on SEC Edgar Company Fact URL

Before we plunge into the code, let’s give a bit of context. We’ll use a URL from the Securities and Exchange Commission’s (SEC) EDGAR system as our example. EDGAR is an electronic system developed by the SEC to increase the efficiency and fairness of the securities market for the benefit of investors, corporations, and the economy by accelerating the receipt, acceptance, dissemination, and analysis of time-sensitive corporate information filed with the agency. The URL we’ll be dealing with leads to a company facts zip file, a treasure trove of valuable information.

Cracking Open the ConfigParser

Enough of the context, let’s dive into the code. Python’s ConfigParser module enables us to write programs whose options live in configuration files rather than being hard-coded in the script.

Let’s start with a basic configuration file, which we’ll call config.ini. Here’s what it might look like:

[SEC_Edgar]
Company_Facts_Zip_URL = https://www.sec.gov/Archives/edgar/daily-index/xbrl/companyfacts.zip

In this configuration file, we have one section (SEC_Edgar) and one option (Company_Facts_Zip_URL) that is set to the URL of the SEC Edgar Company Fact zip file.

Reading the Configuration File

Now, onto the Python script. Here’s how you can read the config.ini file:

import configparser

config = configparser.ConfigParser()
config.read('config.ini')

url = config.get('SEC_Edgar', 'Company_Facts_Zip_URL')
print(url)  # https://www.sec.gov/Archives/edgar/daily-index/xbrl/companyfacts.zip

Breaking down the script, we first import the configparser module. Next, we create an instance of ConfigParser and read our configuration file using the read method. Then, we retrieve the URL using the get method, specifying the section and the option. Finally, we print the URL.
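ConfigParser has a few more conveniences worth knowing. You can index a parser like a dictionary, and get() accepts a fallback= argument that returns a default instead of raising NoOptionError when an option is missing. A short sketch; note that Request_Timeout is a hypothetical option I made up for illustration, not part of the config file above:

```python
import configparser

config = configparser.ConfigParser()
# read_string() parses config text directly, handy for demos and tests
config.read_string("""
[SEC_Edgar]
Company_Facts_Zip_URL = https://www.sec.gov/Archives/edgar/daily-index/xbrl/companyfacts.zip
""")

# Dictionary-style access is an alternative to config.get()
url = config["SEC_Edgar"]["Company_Facts_Zip_URL"]

# fallback= supplies a default when the option is absent
timeout = config.get("SEC_Edgar", "Request_Timeout", fallback="30")
```

Keep in mind that ConfigParser returns everything as strings; use getint(), getfloat(), or getboolean() when you need typed values.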

Wrapping Up

And there you have it – a quick and effective way of managing app settings in Python using ConfigParser. This versatile module can handle a variety of scenarios beyond what we’ve covered today, making it a valuable tool in any Python programmer’s toolkit.

Enjoyed this post? Want to dive deeper into Python programming? Don’t forget to subscribe to our blog for more insightful content and follow us on our social media channels for updates.


How to Easily Overlap HTML Elements Without Position CSS

Discover how to easily overlap HTML elements in one simple way by leveraging the power of the CSS grid layout.

Sometimes an approved user interface design requires us to overlap HTML elements. This article describes how to do this with just HTML and CSS using the CSS grid layout.

The primary aim of the examples in this article is to prove a point. They may meet your requirements with similar code but may not fit exactly.

HTML

Starting with the definition of the HTML, we need at least three elements: the grid container, the background element, and the foreground element.

<div class="grid-container">
  <div class="background">
    This is the background
  </div>
  <div class="foreground">
    This is the foreground
  </div>
</div>

The grid container wraps each of the elements that need to be overlapped. Meanwhile, the child elements have classes specifying where in the grid the browser should place them.

We can control which elements overlap implicitly through the order of the HTML elements in the grid container, or explicitly with the z-index CSS property.

CSS

Although the HTML has been defined, the CSS still needs to be specified. The CSS is more involved and includes detail about the CSS grid. Therefore, understanding these details will allow you to change this code to meet your requirements.

.grid-container {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr 1fr;
  grid-template-rows: 1fr 1fr 1fr 1fr;
}

.grid-container .background {
  grid-column: 1 / 5;
  grid-row: 1 / 4;
}

.grid-container .foreground {
  grid-column: 2 / 4;
  grid-row: 3 / 5;
}

The Grid to Overlap HTML

The grid container specifies the definition of a 4 x 4 grid. I am using fr units for ease of use in this article. Your requirements may differ. Learn more about the flex value here: https://developer.mozilla.org/en-US/docs/Web/CSS/flex_value.

Firstly, a grid’s definition includes setting the display property to grid. Additionally, the grid-template-columns and grid-template-rows CSS properties define the rows, columns and their sizes. Learn more about CSS grid layout at: https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout

Furthermore, we still need to specify where elements are placed within the grid. As an illustration using the CSS above, the background class positions an HTML element behind the foreground. Do this by setting the shorthand properties grid-column and grid-row to span more of the grid than the foreground.

To clarify, the grid-column property specifies the horizontal start and end edges of the HTML element within the grid.

Similarly, the grid-row property specifies the vertical start and end edges of the HTML element within the grid.

Remember: Grid line numbers start at 1, and the last line number is one more than the number of rows or columns in the grid. We are specifying grid lines, not cells.

In addition to the background class, we need to create the foreground class. The foreground class positions an element in front of the background element. Much like the background class, do this using the grid-column and grid-row properties.

Even though a requirement to overlap HTML elements can mean several things, adjusting the background class’s grid-column and grid-row properties can help position the background element to better match your requirements.

Overlap HTML

In summary, defining the CSS grid layout allows us to overlap HTML elements with just HTML and CSS. To that end, the grid-column and grid-row specify where each element is placed within the grid.

To demonstrate, the image below shows the grid lines from Google Dev Tools. You can see the 4 x 4 grid and the positions of the HTML elements.

See the grid lines and how CSS grid is used to overlap HTML elements
The grid lines and the position of the HTML elements.

See a working example using jsfiddle below or https://jsfiddle.net/asusultra/5cho1p2g/2/.

Note: You may notice that I use a few CSS properties for colors, height, and width. This is only for display purposes.

Set Up a Flash Drive with Qubes OS in 5 Easy Steps

Discover how to set up a flash drive with Qubes OS and take it anywhere

I live in a world of computers with Microsoft Windows operating systems, from DOS to Windows 10. Macs were an old-school (literally, at school) way of computing, and Linux was for tech geniuses with nefarious undertones. I’m happy to say this bias is transforming and I’m enjoying it. This article describes how to set up a flash drive with Qubes OS.

I found Qubes OS during my exploration with the Raspberry Pi. Qubes OS presents itself as “a reasonably secure operating system”. This operating system runs applications in Virtual Machines (or “qubes”) creating more separation between them.

Qubes OS can also be installed on a flash drive.

This isn’t a guide to walk through the installation of Qubes OS. Instead, here are a few helpful resources for getting Qubes OS installed from a Windows PC. The documentation is already quite helpful.

1. Ensure System Compatibility

First, make sure your system supports Qubes OS and can leverage the benefits of the operating system. I never meant to install Qubes OS when I built my Windows PC, but after changing some settings, my system was ready: https://www.qubes-os.org/doc/system-requirements/

2. Download the ISO File

Furthermore, assuming your system can support Qubes OS, the installation file needs to be downloaded. I suggest downloading the ISO file. This link explains in more detail how to verify the digital signature: https://www.qubes-os.org/downloads/

3. Copy the ISO to Prepare the Flash Drive

Although there are a few ways to install an operating system from an ISO file, you need to prepare the installation medium first. To that end, I chose a flash drive and used Rufus to help. Check out the documentation carefully – it recommends Rufus on Windows. Select “DD Image” after selecting the Qubes ISO: https://www.qubes-os.org/doc/installation-guide/#copying-the-iso-onto-the-installation-medium

4. Run the Installation on the Flash Drive

After preparing the installation medium, prepare the installation drive. I installed Qubes OS on a flash drive. Boot from the installation medium and make sure the device is available when you reach the Installation Summary screen. Don’t worry, you can still cancel the installation on this screen. You may not even get there if your system doesn’t yet support Qubes OS.

Tip: the installation process will ask for two passwords 1) to encrypt the drive and 2) to login. You will need to enter both after each boot up so choose them well: https://www.qubes-os.org/doc/installation-guide/#installation

5. Perform Post-Installation

This is the last step. There are a few post-installation steps to go through: https://www.qubes-os.org/doc/installation-guide/#post-installation

Enjoy your reasonably secure operating system!

Check out the Qubes OS Project source code on GitHub: https://github.com/QubesOS

A Simple Time Management Alternative With Trello

Learn how to get things done with a powerful time management alternative

We have all felt the elusiveness of time. It is hard to find the necessary time to get things done. This is especially noticed if you have obligations tied to your financial well-being. People will tell you, “you have to make the time.” What they don’t tell you is what you have to give up to do that. It doesn’t have to be this way with one simple time management alternative.

The Time Management Alternative Structure

People have always made lists to help remember things – milk at the grocery store, cleaning the gutters, taking out the trash, how to make that shrimp linguine everyone loved last week. But with the growing amount of life-hackery needed to manage the growing demands on our time, organization has become a key ingredient in getting things done.

With a little more structure, lists can do more than help keep your refrigerator stocked. There are free tools to help like Trello. It super-charges lists and offers a way to keep your to-dos organized. By leveraging the free version of Trello you can keep yourself organized and on track to accomplishing everything you need or want.

There are four features from Trello that make this possible. They are all offered for free.

Cards

Cards in Trello are your to-do items. Each item is represented by a card. Cards allow elaborate descriptions so you can write exactly what needs to be done. They also allow checklists to break down your tasks even further. While upgrading your plan will allow special Power-Ups that give cards even more power, I’ll focus on the free version for this article.

Lists

Lists in Trello are simply collections of Cards. Each list can be named, archived, and rearranged on a Board.

Boards

Boards in Trello are collections of Lists. The free version of Trello lets you set background colors and images, keep the recommended number of lists per board, and create multiple boards.

Teams

Teams in Trello are a way to organize Boards. It may not be the best name for you. I find it helpful to think of it as a category for Board collections. The free version of Trello allows ten boards for each team. I find it helpful to create separate teams for my main priorities. For example, I have a team for managing this site. I also have teams to help manage my life at home.

These organization structures built into Trello provide a lot of potential for managing a growing to-do list. Leveraging these features effectively is important to make the most of them. I will attempt to describe a handy template I use for accomplishing goals with the help of Trello without needing calendars, reminders, or alarms.

Leveraging the Time Management Alternative

Your goals can be achieved with help from the power of Trello. The following sections will describe how I have done that and how you can too. I will start with Lists and what kind of Cards they would include. I’ll then move on to Teams and what kind of Boards they would include.

Brainstorm Lists within the time management alternative

This list contains thoughts and ideas. Each card is an item in a brainstorm. The list is a space for creative experimentation. The cards that come out of this list are then further categorized as Undecided, Not Doing, or Backlog. This is one of the most important lists. This is where all future activity begins.

Undecided List

When there are items that come out of a brainstorm that you are just not quite sure of, they go here. These would be considered later. The cards in this list are in a Trello-fueled limbo state. They may be completed later, or later it will be decided that they will not be done. The idea has been captured and we’ll decide later what to do with it.

Not Doing List

Items in this list are most likely not going to be done in the future. Each card came from a brainstorm or undecided list and was deemed unworthy to complete any further. This list is to preserve your ideas and offers you a chance to reconsider the worth of the items or fuel better ideas.

Backlog List

This list contains items that we are expecting to do in the future. When you decide that an item from the brainstorm will be done, it goes here first.

Prioritized List

The cards in this list represent items you have decided to do before others in your lists of ideas and backlog items. When you complete your current tasks, these are next. The cards in this list can be prioritized too. For example, you could order the list from top to bottom by importance. When a new item moves to in progress, it would be the card at the top of the list. Often, I would create a Proposed list placed before the Prioritized list. I would fill this list with backlog items as I prepare to prioritize them.

In Progress List

You have tasks that you are currently working on. They should be on this list. Keeping this list short is important. If everything is in progress, nothing is. Multi-tasking is a lie. I would recommend no more than three items at a time.

Complete List

Move your completed tasks to this list. You can track your progress towards your goals and celebrate each achievement along the way. Each card in this list can be reviewed or removed. I often add another list called Review to capture items that are completed but still await further analysis. This is an opportunity for continuous improvement. I recommend taking advantage of that.

Priority Teams with the time management alternative

Each priority team should represent a significant area of your life that you want to manage with the power of Trello. This could be long-term relationship goals or how to get rid of that collection of old dishware. Anything important enough to you should be made a team.

Each team would have at least two boards. One board is for ideas. This contains the first three lists: Brainstorm, Undecided, and Backlog. The other board would contain the other lists: Prioritized, In Progress, and Complete.

Notice throughout this article I haven’t set a date in Trello. While it is possible to add a date to cards and each card has an Activity log with a timestamp, there is no need to specify a date unless absolutely necessary. Keeping away from deadlines is one advantage of using Trello. I recommend using your best judgment and specify a date if it makes sense.

But couldn’t I just ignore my items?

A. Of course. You could ignore your board of items, forget your priorities, and choose not to organize your to-do items. None of those things encourage the completion of your goals.

This is a lot to set up! Is there an alternative?

A. Absolutely. What I described here works well for me. With the free version of Trello, I encourage you to experiment and find what works for you.

What if my number of free teams and boards meet the Trello maximum?

A. You can recycle any of the items in Trello. Teams, Boards, Lists, and Cards can be modified or archived and created anew. As your goals are accomplished, recycle your Trello teams.

Why Trello?

A. It is a free way to get organized and get things done. There are alternatives that are less conducive to day-to-day activities, personal flexibility, and budget. After this year’s Black Friday and Cyber Monday shopping, why not try something that’s free?

How do I get Trello?

A. It is quick and easy. Follow the simple instructions provided by Trello to get started!

How to Leverage the Strength of Branch Policies

Create branch policies to tie your branch, pull requests, and build into a powerful automated experience

Branch policies can act as a sort of glue to combine a branch, a build, and pull requests. Many options are available to you when configuring branch policies. First, make sure you require a pull request. Next, you’ll need to create a build in Azure DevOps to leverage when configuring a build policy.

Make sure you have at least one reviewer:

Require a minimum number of reviewers for pull requests
Require a minimum number of reviewers for pull requests

Pull Requests are required and at least one approval is needed to complete them.

A build policy can be added too. Let’s do that. Click the Add build policy button and fill in the form:

Add build policy
Add build policy

The build pipeline is specified. The trigger should be Automatic. The build should be required and have an expiration. Give your build policy a name that describes its purpose. In this case, I called it Develop-Build-Policy.

Now, let’s look at other configuration options. One option is to Limit merge types. I will choose Squash Merge to help keep my Git history clean. I’ll also add myself as an automatically included code reviewer.

Branch policies setup
Branch policy setup

With a build policy added, we have an automated build set up to run after a pull request is created. When a feature branch needs to merge to develop, a pull request is required. When a pull request is created, the automated build will run. The pull request cannot be completed (which would cause a merge to the develop branch) until it receives approval from at least one required approver and the automated build succeeds.

Pull Request Policy status
Pull Request Policy status

Assuming the in-progress build succeeds, I could approve this pull request which would allow it to complete. After completing the pull request, the code in my feature branch would merge to the develop branch.