CentOS AMI on Amazon EC2 – How To Resize The Disk

The official CentOS Linux AMI on EC2 currently comes with only an 8GB root partition.  Want to resize that?  Here is how…  (This assumes the underlying EBS volume is already larger than 8GB, for example because the instance was launched with a larger root volume, and only the partition and filesystem still need to grow.)

First, make sure you start a small (t2.micro or similar) instance to do this work with.  It should be based on the Amazon Linux AMI, and you should have root access to it.  Let’s call it “fix-the-disk-instance”.

  1. Stop the new CentOS instance (the one whose disk you want to resize)
    1. Go to EC2 > Instances
    2. Select the Instance
    3. Make sure the Instance has a name that makes sense (e.g. snap.appcove.net)
    4. Click “Actions” > “Instance State” > Stop
  2. Detach the block device
    1. Go to EC2 > Volumes
    2. Find the volume by looking under Attachment Information for the instance name (e.g. snap.appcove.net)
    3. Select the volume
    4. Click “Actions” > “Detach Volume”
  3. Attach the block device to “fix-the-disk-instance”
    1. Go to EC2 > Volumes
    2. Find the volume by looking under Attachment Information for the instance name (e.g. snap.appcove.net)
    3. Select the volume
    4. Click “Actions” > “Attach Volume”
    5. Attach it to “fix-the-disk-instance” so we can fix it up
  4. Start “fix-the-disk-instance” and login
    1. Go to EC2 > Instances
    2. Select the Instance “fix-the-disk-instance”
    3. Click “Actions” > “Instance State” > Start
    4. Wait for it to start
    5. SSH into it and escalate to root
  5. Run `parted` command and print list
    1. As root, run the following command: parted
    2. Type the following command: print all<enter>
    3. Find your disk in the output and note the path (e.g. /dev/xvdf)
    4. Quit the program by typing: quit<enter>
  6. Resize the partition:
    1. Run fdisk on the device path found above (e.g. fdisk /dev/xvdf)
    2. List the partitions
      1. Type: p<enter>
      2. Take note of the “Start” column (e.g. 2048)
    3. Delete the partition
      1. Type: d<enter>
    4. Create a new one by typing:
      1. n (for new)
      2. p (for primary)
      3. 1 (for first partition)
      4. Enter the first sector to match the Start value noted above (e.g. 2048)
      5. Last sector should be the default (end of disk)
      6. w (for writing to disk)
  7. Now check the filesystem and resize it (a consolidated command sketch follows this list)
    1. Run this command making sure to use the right device path (but add a 1 to the end for the first partition)
      1. e2fsck -f /dev/xvdf1
    2. Resize the filesystem, making sure to use the right device path (but add a 1 to the end for the first partition)
      1. resize2fs /dev/xvdf1
  8. Shutdown the instance “fix-the-disk-instance”
    1. Go to EC2 > Instances
    2. Select the Instance “fix-the-disk-instance”
    3. Click “Actions” > “Instance State” > Stop
  9. Detach the block device and reattach it to the original server
    1. Go to EC2 > Volumes
    2. Find the volume by looking under Attachment Information for the instance name (e.g. fix-the-disk-instance).
    3. Make sure it is the right volume by looking at the ID/Size.
    4. Select the volume
    5. Click “Actions” > “Detach Volume”
    6. Click “Actions” > “Attach Volume”
    7. IMPORTANT: enter “/dev/sda1” as the device so it attaches as the root volume
    8. Attach it to the original server
  10. Start the original instance and log in
    1. Go to EC2 > Instances
    2. Select the original instance (e.g. “snap.appcove.net”)
    3. Click “Actions” > “Instance State” > Start
    4. Wait for it to start
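For reference, here is roughly what the disk work from steps 5 through 7 looks like in a terminal session on “fix-the-disk-instance” (the device path /dev/xvdf and the start sector 2048 are examples; substitute the values from your own output):

# parted
(parted) print all          # find your disk and note its path, e.g. /dev/xvdf
(parted) quit

# fdisk /dev/xvdf
Command (m for help): p     # note the Start sector of partition 1, e.g. 2048
Command (m for help): d     # delete the old partition
Command (m for help): n     # create a new partition...
p                           # ...primary
1                           # ...partition number 1
2048                        # first sector: same as the old Start value
<enter>                     # last sector: accept the default (end of disk)
Command (m for help): w     # write the new table and exit

# e2fsck -f /dev/xvdf1      # check the filesystem on the first partition
# resize2fs /dev/xvdf1      # grow the filesystem to fill the enlarged partition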


And that’s all!


nginx: how to specify a default server

Several years ago when I started using nginx, I was under the mistaken assumption that

server_name _;

was a wildcard server name and would be used if no other server names matched.

Nope.

I made a change on a production system, adding a new site on an existing IP address.  What harm could that cause, right?

After several clients graciously notified us that the wrong site was coming up when they visited their domains, I quickly tracked the problem down.

First you need to realize that server_name _ is actually not special.  It is just a non-match.

Second you need to realize that in the event of no matches, nginx will select the first server{} block and use that.

This means that the ORDER of your server blocks is critical if you are using `server_name _;`. 

In our case, the order was incorrect, and my new domain was picking up all requests for that IP address.  I mention this because I believe a number of system administrators have this configured incorrectly, just waiting to bite them.

There is a better way.

The nginx `listen` directive includes a `default_server` option that looks like this:

server{
   listen 1.2.3.4:80 default_server;
   ...
}

From http://wiki.nginx.org/HttpCoreModule#listen

If the directive has the default_server parameter, then the enclosing server {…} block will be the default server for the address:port pair. This is useful for name-based virtual hosting where you wish to specify the default server block for hostnames that do not match any server_name directives. If there are no directives with the default_server parameter, then the default server will be the first server block in which the address:port pair appears.
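For example, a minimal sketch of that two-block setup (the IP address and hostnames are made up):

# Normal name-based virtual host; only answers for these names
server {
    listen 1.2.3.4:80;
    server_name example.com www.example.com;
    ...
}

# Explicit catch-all for anything else arriving on 1.2.3.4:80,
# no matter where this block appears in the configuration
server {
    listen 1.2.3.4:80 default_server;
    server_name _;
    return 444;    # or serve a generic landing page instead
}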

The moral of the story

It is better to use the correct mechanism (above) than to rely on a single non-matching server_name.

I hope someone finds this useful!

Reference: http://stackoverflow.com/questions/9454764/nginx-server-name-wildcard-or-catch-all


Arduino Bipolar Stepper Circuit

I got a cool little 4-wire bipolar stepper motor and wanted to drive it via Arduino.

I based the design on this reference: 

http://arduino.cc/en/Reference/StepperBipolarCircuit (credit for the following image belongs there as well)


[Image: four-pin bipolar stepper wiring diagram (bipolar_stepper_four_pins)]

Here is how it looks for real.  Kind of a mess of wires, but you know what?  It works great.

[Photo: the finished circuit on a breadboard (IMG_2031)]
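For completeness, a minimal sketch using the standard Arduino Stepper library can spin it (the pin numbers below are an assumption; use whichever pins you actually wired to the driver):

#include <Stepper.h>

// 200 steps per revolution is typical for a 1.8-degree motor; adjust for yours
const int stepsPerRevolution = 200;

// the four motor wires, via the driver circuit, on pins 8-11 (assumed)
Stepper myStepper(stepsPerRevolution, 8, 9, 10, 11);

void setup() {
  myStepper.setSpeed(60);  // RPM
}

void loop() {
  myStepper.step(stepsPerRevolution);   // one revolution forward
  delay(500);
  myStepper.step(-stepsPerRevolution);  // one revolution back
  delay(500);
}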


Introducing FileStruct (for Python)

FileStruct is a lightweight and fast file-cache / file-server designed for web applications.  It solves the problem of “where do I save all of those uploads?” that has been encountered time and time again.  FileStruct uses the local filesystem, but in a sensible way (keeping permissions sane), and with the ability to secure it to a reasonable level.

https://github.com/appcove/FileStruct/

Here is a simple example of taking an image upload, resizing, and saving it:

# client is an already-configured FileStruct client; mydata holds the uploaded bytes
with client.TempDir() as TempDir:
    # write the upload into the temp directory, make a 100x100 thumbnail,
    # then ingest both files and get back their content hashes
    with open(TempDir.FilePath('upload.jpg'), 'wb') as f:
        f.write(mydata)
    TempDir.ResizeImage('upload.jpg', 'resize.jpg', '100x100')
    hash1 = TempDir.Save('upload.jpg')
    hash2 = TempDir.Save('resize.jpg')

Design Goals

Immutable Files

FileStruct is designed to work with files represented by the SHA-1 hash of their contents. This means that all files in FileStruct are immutable.

High Performance

FileStruct is designed as a local repository of file data accessible (read/write) by an application or web application. All operations are local I/O operations and are therefore very fast.

Where possible, streaming hash functions are used to prevent iterating over a file twice.
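As an illustration of the idea (this is not FileStruct’s internal code), a streaming SHA-1 with Python’s hashlib reads the file once, in chunks:

import hashlib

def file_sha1(path, chunk_size=65536):
    """Compute the SHA-1 of a file without loading it into memory all at once."""
    h = hashlib.sha1()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()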

Direct serving from Nginx

FileStruct is designed so that Nginx can serve files directly from its Data directory using an X-Accel-Redirect header. For more information on this Nginx configuration directive, see http://wiki.nginx.org/XSendfile

Assuming that nginx runs as the nginx user and the file database is owned by the fileserver group, nginx needs to be in the fileserver group to serve files:

# usermod -a -G fileserver nginx
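A minimal sketch of the nginx side of that arrangement (the location prefix and paths are assumptions, not FileStruct’s documented configuration):

# Internal-only location mapped onto the FileStruct Data directory.
# The application responds with a header such as
#   X-Accel-Redirect: /filestruct/da/39/da39a3ee5e6b4b0d3255bfef95601890afd80709
# and nginx then streams the file itself.
location /filestruct/ {
    internal;
    alias /var/lib/filestruct/Data/;
}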

Secure

FileStruct is designed to be as secure as your hosting configuration. Where possible, a dedicated user should be allocated to read/write to FileStruct, and the database directory restricted to this user.

Simple

FileStruct is designed to be incredibly simple to use.

File Manipulation

FileStruct is designed to simplify common operations on files, especially uploaded files. Image resizing for thumbnails is supported.

Temporary File Management

FileStruct is designed to simplify the use of temp files in an application. The API supports creating a temporary directory, placing files in it, ingesting files into FileStruct, and deleting the directory when completed (or retaining it in the event of an error).

Garbage Collection

FileStruct is designed to retain files until garbage collection is performed. Garbage collection consists of telling FileStruct what files you are interested in keeping, and having it move the remaining files to the trash.
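A rough sketch of that concept (the real API, directory layout, and trash handling may differ):

import os, shutil

def collect_garbage(data_dir, trash_dir, keep_hashes):
    """Move every stored file whose hash is not in keep_hashes into the trash."""
    os.makedirs(trash_dir, exist_ok=True)
    for dirpath, _dirnames, filenames in os.walk(data_dir):
        for name in filenames:  # files are named by their content hash
            if name not in keep_hashes:
                shutil.move(os.path.join(dirpath, name), os.path.join(trash_dir, name))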

Backup and Sync with Rsync

FileStruct is designed to work seamlessly with rsync for backups and restores.

Atomic operations

At the point a file is inserted into or removed from FileStruct, it is a filesystem move operation. This means that under no circumstances will a file exist in FileStruct whose contents do not match the name of the file.

No MetaData

FileStruct is not designed to store MetaData. It is designed to store file content. There may be several “files” which refer to the same content. empty.log, empty.txt, and empty.ini may all refer to the empty file Data/da/39/da39a3ee5e6b4b0d3255bfef95601890afd80709. However, this file will be retained as long as any aspect of the application still uses it.

Automatic De-Duplication

Because file content is stored in files named by the hash of that content, automatic file-level de-duplication occurs. When a file that already exists is pushed to FileStruct, there is no need to write it again.

This carries the distinct benefit of being able to use the same FileStruct database across multiple projects if desired, because the content of file Data/da/39/da39a3ee5e6b4b0d3255bfef95601890afd80709 is always the same, regardless of the application that placed it there.
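To make that layout concrete, here is a sketch of the general technique (not FileStruct’s actual code), reusing the file_sha1 helper from the earlier sketch:

import os, shutil

def ingest(temp_path, data_dir):
    """Move temp_path to a content-addressed location like Data/da/39/<sha1>."""
    digest = file_sha1(temp_path)
    dest_dir = os.path.join(data_dir, digest[:2], digest[2:4])
    dest = os.path.join(dest_dir, digest)
    if os.path.exists(dest):
        os.remove(temp_path)          # identical content already stored: de-duplicated
    else:
        os.makedirs(dest_dir, exist_ok=True)
        shutil.move(temp_path, dest)  # a rename on the same filesystem, so the file appears atomically
    return digest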

Note: In the event that multiple instances or applications use the same database, the garbage collection routine MUST take all references to a given hash into account, across all applications that use the database. Otherwise, it would be easy to delete data that should be retained.

How to Generate an SSH Keypair (public/private) on Windows

Have you ever been asked to generate an SSH keypair in order to gain access to a server, GitHub, or an SFTP site?

Here is how on Windows.

First, download puttygen.exe from here:

http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html


Second, run puttygen.exe and follow these instructions:

(except, put your name instead of Sharon)

(On step 8, copy and paste this and send it to whomever requested it)

[Image: step-by-step puttygen instructions]

nginx + apache + mod_wsgi + python: how to make dynamic pages expire

When writing dynamic web applications, we use nginx as a front-end web server and apache+mod_wsgi as an application server.

It is the job of nginx to:

  1. Handle SSL, and domain-level rewriting/redirects
  2. Handle static content (.jpeg, .png, .css, .js, .txt, .ico, .pdf, etc….)
  3. Handle dynamic downloads through X-Accel-Redirect
  4. Proxy other requests to apache
  5. Set the proper cache-control and expires headers on content

Ever run into the situation where you click log out, then click the back button, and can still see the pages?  That is bad.  They are dynamic pages and should not be cached.

However, static content (images, CSS, JavaScript, and so on) SHOULD be cached. It is important that any references to these files have a way to invalidate the cache. We append a number as a query string:

/path/to/script.js?192012129

This number is updated from time to time (via a Python variable) when we need to invalidate the cache.
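A minimal sketch of how that can look in Python (the names here are made up for illustration):

# Bump this whenever static assets change and cached copies must be invalidated.
STATIC_VERSION = '192012129'

def static_url(path):
    """Append the cache-busting query string to a static asset path."""
    return '{0}?{1}'.format(path, STATIC_VERSION)

# static_url('/path/to/script.js')  ->  '/path/to/script.js?192012129'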

Anyway, here are some helpful nginx configuration directives.

# Send static requests directly back to the client
location ~ \.(gif|jpg|png|ico|xml|html|css|js|txt|pdf)$
{
    root  /path/to/document/root;
    expires max;
}

# Send the rest to apache
location /
{
    add_header Cache-Control 'no-cache, no-store, max-age=0, must-revalidate';
    add_header Expires 'Thu, 01 Jan 1970 00:00:01 GMT';
    proxy_pass http://127.0.0.1:8123;
}