PostgreSQL – describe constraints with schema and table

Here is a query that looks at the pg_catalog views and extracts a list of constraints along with their schema, table, name, and definition.

SELECT 
  pg_namespace.nspname as schema_name,
  pg_class.relname as table_name,
  pg_constraint.conname as constraint_name,
  pg_get_constraintdef(pg_constraint.oid, true) AS constraint_def
FROM 
  pg_catalog.pg_constraint 
  INNER JOIN pg_catalog.pg_namespace ON pg_constraint.connamespace = pg_namespace.oid 
  INNER JOIN pg_catalog.pg_class ON pg_constraint.conrelid = pg_class.oid
WHERE True
  AND pg_namespace.nspname = 'public'

Simulating ENUM in PostgreSQL using a CHECK expression

PostgreSQL is a very powerful database. One of the things that seems missing when moving from MySQL is the ability to simply create an enumeration. ENUM is nice when a field has a small, fixed set of values that carry meaning in your code.

In PostgreSQL, you have several choices, but one simple one is to create a CHECK expression, as follows. Skip the IS NULL part if you don’t want the field to be nullable.

ALTER TABLE 
   "public"."CallCenter_Transfer"
ADD CONSTRAINT 
   "CallCenter_Transfer_TransferStatus"
   CHECK (
      "TransferStatus" IS NULL 
       OR 
      "TransferStatus" = ANY(ARRAY['TransferComplete', 'TransferFailedNoAnswer', 'TransferFailedProspectLost'])
      )

How to force-drop a postgresql database by killing off connection processes

Ever need to drop a postgresql database, but it would not let you because there are open connections to it (from a webapp or whatever)?

Quite annoying.  On a production server where other databases are in use, restarting PostgreSQL is a last resort, because it generates downtime for your site (even if brief).

I finally took the time to scratch around and find the answer.

As a superuser, to list all of the open connections to a given database:

select * from pg_stat_activity where datname='YourDatabase';

As a superuser, to drop all of the open connections to a given database:

select pg_terminate_backend(procpid) from pg_stat_activity where datname='YourDatabase';

Or, on PostgreSQL 9.2 and later, change `procpid` to `pid`:

select pg_terminate_backend(pid) from pg_stat_activity where datname='YourDatabase';

Here are some references to the functions:

http://www.postgresql.org/docs/current/static/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE

http://www.postgresql.org/docs/8.4/static/monitoring-stats.html#MONITORING-STATS-VIEWS-TABLE

PostgreSQL Dump and Restore Notes

The pg_dump and pg_restore commands provide excellent flexibility in storing a compressed dump file, and selectively restoring any part of it.

I’ve found that dropping and re-creating the target database is the cleanest way to restore a dumpfile — no stray relations left to cause trouble.

Unless you own all of the objects being restored, you may need to be SUPERUSER in order to have a successful restore.

The custom dump format is quite useful.  Unlike the normal sequence of SQL statements you may be used to from mysqldump (and pg_dump as well), the --format=custom option will create a compressed archive file that can be selectively read with pg_restore.  That flexibility can come in handy if you *just* need the schema from one table, or *just* the data from another table.

Dump:
pg_dump --format=custom -U jason_super MyDatabase > MyDatabase.pgdump

Restore:
pg_restore --exit-on-error --clean --dbname=MyDatabase MyDatabase.pgdump

Get all of the SQL:
pg_restore TMTManage_2.pgdump | more

Get some of the SQL:
pg_restore --schema=ACRM --table=Admin TMTManage_2.pgdump | more

A brief introduction to AppStruct

Have been very busy at work lately.  We made the decision about a month ago to switch (most|all) new projects over to use Python 3 with Apache, mod_wsgi, and AppStruct.  You may know what the first 3 are, but the 4th??

Special thanks goes to Graham Dumpleton behind mod_wsgi, and James William Pye behind Python>>Postgresql.   They are not involved or affiliated with AppCove or AppStruct (aside from great mailing list support) BUT if it were not for them, this framework would not exist.

AppStruct is a component of Shank, a meta-framework.  A stand-alone component in its own right, it represents the AppCove approach to web-application development.  Most of it is in planning, but the parts that have materialized are really, really cool.

Briefly, I’ll cover the two emerging areas of interest:

AppStruct.WSGI

This is a very pythonic (in my opinion) web application framework targeted toward Python 3.1 (a challenge in itself at this point).  We really wanted to base new development on Python 3.1+, as well as PostgreSQL using the excellent Python3/PostgreSQL library at http://python.projects.postgresql.org/.  However, none of the popular frameworks that I am aware of support Python 3, and most (if not all) of them have a lot of baggage I do not want to bring to the party.

Werkzeug was the most promising, but alas, I could not find Python 3 support for it either.  In fact, I intend to utilize a good bit of code from Werkzeug in finishing off AppStruct.WSGI.  (Don’t you just love good OSS licences?  AppStruct will be released under one of those also).

HTTP is just not that complicated.  It doesn’t need to be bundled up into n layers of indecipherable framework upon framework.  It doesn’t need to be abstracted to death.  It just needs to be streamlined a bit (with regard to request routing, reading headers, etc.).

Python is an amazing OO language.  Its object model (or data model) is one of the most well conceived (if not the most well conceived) of any similar language, ever.  I want to use that to our advantage…

Inheritance, including multiple inheritance, has very simple rules in Python.  We want to use that as well.

We wish to provide developers with access to the low-level guts they need for 2% of requests, but not make them do extra work for the other 98%.

Speed is of the essence.  Servers are not cheap, and if you can increase your throughput by 5x, that’s a lot fewer servers you need to pay for.

So, how does it work?

Well, those details can wait for another post.  But at this point the library is < 1000 lines of code, and does a lot of interesting things.

  • A fully compliant WSGI application object
  • 1 to 1 request routing to Python packages/modules/classes
  • All request classes derived from AppStruct.WSGI.Request

The application maps requests like /some/path/here to objects, like Project.WSGI.some.path.here.  If there is a trailing slash, then the application assumes that the class is named Index.  The object that is found is verified to be a subclass of AppStruct.WSGI.Request, and then…
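
Roughly, that lookup might go something like this (a hypothetical sketch of one plausible reading of the mapping, not the actual AppStruct code):

import importlib

def ResolveRequestClass(Path, Root='Project.WSGI'):
   # '/some/path/here' -> attribute 'here' under package Project.WSGI.some.path
   # '/some/path/'     -> class 'Index' in package Project.WSGI.some.path
   Parts = [P for P in Path.split('/') if P]
   if Path.endswith('/'):
      Parts.append('Index')
   Module = importlib.import_module('.'.join([Root] + Parts[:-1]))
   Found = getattr(Module, Parts[-1])
   # The found object must be a subclass of AppStruct.WSGI.Request
   # before it is allowed to handle the request (see below).
   return Found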

Wait!  What about security?

Yes, yes, very important.  A couple things to point out.  First, the URLs are passed through a regular expression that ensures they contain only the characters that may be used in valid Python identifiers (not starting with _), delimited by "/".  Second, the import and attribute lookup verify that any object (e.g. class) found is a subclass of the right thing.  And so on and so forth…
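
For illustration, a check along those lines might look like this (the exact pattern AppStruct uses is an assumption):

import re

# Each path segment must look like a Python identifier that does not
# start with an underscore (a letter first, then letters/digits/underscores).
ValidPath = re.compile(r'^(/[A-Za-z][A-Za-z0-9_]*)*/?$')

ValidPath.match('/Admin/Users/')      # accepted
ValidPath.match('/Admin/_private')    # rejected: leading underscore
ValidPath.match('/Admin/drop-table')  # rejected: "-" is not an identifier character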

But you may say, “Wait, what about my/fancy-shmancy/urls/that-i-am-used-to-seeing?”  Ever hear of mod_rewrite?  Yep.  Not trying to re-invent the wheel.  Use Apache for what it was made for, not just as a dumb request handler.

What about these request objects?

They are quite straightforward.  There are several attributes which represent request data, the environment, query string variables, post variables, and more.  There is a .Response attribute which maps to a very lightweight response object.

Speaking of it, it has only 4 attributes: [Status, Header, Iterator, Length].  As you see, it’s pretty low-level WSGI stuff.  But the developer would rarely interact with it, other than to call a method like .Response.Redirect('http://somewhere-else.com', 302)
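
A minimal sketch of such a response object (hypothetical; the real class surely differs in detail) could be as small as this:

class Response:
   def __init__(self):
      self.Status = '200 OK'   # WSGI status line
      self.Header = []         # list of (name, value) header tuples
      self.Iterator = []       # the WSGI response body iterable
      self.Length = 0          # content length in bytes

   def Redirect(self, URL, Code=302):
      # A guess at the behavior: set a redirect status and a Location header.
      self.Status = '{0} Redirect'.format(Code)
      self.Header.append(('Location', URL))
      self.Iterator = [b'']
      self.Length = 0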

Once the application finds the class responsible for handling the URL, it simply does this (sans exception catching code):

RequestObject = Request(...)
with RequestObject:
   RequestObject.Code()
   RequestObject.Data()
return RequestObject.Response

Wow, that’s simple.  Let me point out one more detail.  The default implementation of Data() does this:

def Data(self):
   self.Response.Iterator = [self.Text()]
   self.Response.Length = len(self.Response.Iterator[0])

So the only thing really required of this Request class is to override Text() and return some text?  Yep, that simple.
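
For example, a minimal “one page wonder” under these rules might look like this (the exact import path is an assumption):

from AppStruct.WSGI import Request   # assumed import location

class Index(Request):
   def Text(self):
      return "<html><body>Hello, world!</body></html>"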

But typically, you would at some point mix in a template class that would do something like this:

class GoodLookingLayout:
   def Text(self):
      return ( 
         """<html><head>...</head><body><menu>""" +
         self.Menu() +
         """</menu><div>""" +
         self.Body() +
         """</div></body></html>"""
         )

And then it would be up to the developer to override Menu and Body (each returning the appropriate content for the page).

Ohh, you may say.  What about templating engine X?  Well, it didn’t support Python 3, and I probably didn’t want it anyway (for 9 reasons)…  If it’s really, really good and fits this structure, drop me a line, please.

What about that Code() method?

Yeah, that’s the place that any “logic” of UI interaction should go.  I’m not advocating mixing logic and content here, but you could do that if you wanted.  You will find in our code a separate package for application business logic and data access that the request classes call upon.  But again, if you are writing a one page wonder, why go to all the trouble?

The only requirement for the Code() method is that it calls super().Code() at the top.  Since the idea is that the class .Foo.Bar.Baz.Index will inherit from the class .Foo.Bar.Index, this gives you a very flexible point of creating .htaccess style initialization/access-control code in one place.  So in /Admin/Index, you could put a bit of code in Code() which ensures the user is logged in.  This code will be run by all sub-pages, therefore ensuring that access control is maintained.  Relative imports are important for this task.

from .. import Index as PARENT
from AppStruct.WSGI.Util import *
class Index(PARENT):
   def Code(self):
      super().Code()
      self.Name = self.Post.FirstName + " " + self.Post.LastName
      # some other init stuff
   def Body(self):
      return "Hello there " + HS(self.Name) + "!"

Summary of AppStruct.WSGI…

To get up and running with a web-app using this library:

  1. A couple mod_wsgi lines in httpd.conf
  2. A .wsgi file that has 1 import and 1 line of code (a guess at this appears after the list)
  3. A request class in a package that matches an expected URI (myproject.wsgi.admin.foo ==  /admin/foo)
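
For item 2, the .wsgi file might look something like this (the AppStruct.WSGI.Application name is an assumption; the post does not show the actual file):

import AppStruct.WSGI

# Hypothetical: hand mod_wsgi a WSGI application object that routes
# requests into the myproject.wsgi package.
application = AppStruct.WSGI.Application('myproject.wsgi')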

Speed?

With no database calls, it’s pushing over 2,000 requests per second on a dev server.  With a couple PostgreSQL calls, it is pushing out ~ 800 per second.

AppStruct.Database.PostgreSQL

This is not nearly as deep as the WSGI side of AppStruct, but still really cool.  To start off, I’d like to say that James William Pye has created an amazing PostgreSQL connection library in Python 3.  I mean just amazing.  In fact, almost so amazing that I didn’t want to change it (but then again…)

What we did here was subclass the Connection, PreparedStatement, and Row classes.  (Well, actually we replaced the Row class.)

Once these were subclassed, we simply added a couple useful features.  Keep in mind that all of the great functionality of the underlying library is retained.

Connection.CachePrepare(SQL)
Connection._PS_Cache

Simply a dictionary of {SQL: PreparedStatementObject} that resides on the connection object.  When you directly or indirectly invoke CachePrepare, it says “if this SQL is in the cache, return the associated object.  Otherwise, prepare it, cache it, and return it”.  This approach really simplifies the storing of prepared statement objects in a multi-threaded environment, where there is really no good place to store them (and be thread safe, connection-failure safe, etc…)
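
In rough terms (a sketch of the idea, not the actual AppStruct code, assuming the underlying connection provides a prepare(SQL) method, as py-postgresql does):

class CachingConnectionMixin:
   def __init__(self):
      self._PS_Cache = {}   # {SQL string: prepared statement object}

   def CachePrepare(self, SQL):
      # Prepare each distinct SQL string once per connection and reuse it.
      if SQL not in self._PS_Cache:
         self._PS_Cache[SQL] = self.prepare(SQL)
      return self._PS_Cache[SQL]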

Connection.Value(SQL, *args, **kwargs)
Connection.Row(SQL, *args, **kwargs)

These simple functions take (SQL, *args, **kwargs) and return either a single value or a single row.  They will raise an exception if != 1 row is found.  They make use of CachePrepare(), so you get the performance benefits of prepared statements without the hassle.  More on *args and **kwargs later under PrePrepare().

Connection.ValueList(SQL, *args, **kwargs)
Connection.RowList(SQL, *args, **kwargs)

Same as above, except they return an iterator (or list, not sure) of zero or more values or rows.
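
Hypothetical usage, following the signatures above (DB is a Connection; the table and column names are made up):

Person = DB.Row("SELECT * FROM person WHERE id = $id", id=100)
Total = DB.Value("SELECT count(*) FROM person")
Names = DB.ValueList("SELECT name FROM person WHERE age > $age", age=21)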

Row class

Simply a dictionary that also supports attribute-style access of items.  After evaluating the library’s Tuple that behaves like a mapping, I decided that, for our needs, a simple dict would be a better representation of a row.
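
Something along these lines (a sketch of the idea, not the actual class):

class Row(dict):
   def __getattr__(self, Name):
      try:
         return self[Name]
      except KeyError:
         raise AttributeError(Name)

   def __setattr__(self, Name, Value):
      self[Name] = Value

R = Row(id=100, name='joe')
R.name     # 'joe'
R['id']    # 100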

Connection.PrePrepare(SQL, args, kwargs) -> (SQL, args)

Ok, so what’s the big deal?  Well, the big deal is that we don’t like positional parameters.  They are confusing to write, read, and analyze; it is hard to see what the heck is going on once you get about 30 fields in an INSERT statement.  Feel free to argue, but maybe I’m not as young as I once was.

We do like keyword parameters.  But the PostgreSQL prepared statement API uses numeric positional parameters.  Not changing that…

The most simple use case is to pass SQL and keyword arguments.

"SELECT a,b,c FROM table WHERE id = $id AND name = $name"
dict(id=100, name='joe')

It returns

"SELECT a,b,c FROM table WHERE id = $1 AND name = $2"
(100, "joe")

Which is suitable for passing directly into the Connection.prepare() method that came with the underlying library.
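
A rough sketch of that keyword-to-positional rewrite (a hypothetical helper; the real PrePrepare also handles the [Field]/[Value] expansions described below):

import re

def PrePrepare(SQL, kwargs):
   Order = []
   def Repl(Match):
      Name = Match.group(1)
      if Name not in Order:
         Order.append(Name)
      return '$%d' % (Order.index(Name) + 1)
   NewSQL = re.sub(r'\$([A-Za-z_][A-Za-z_0-9]*)', Repl, SQL)
   return NewSQL, tuple(kwargs[Name] for Name in Order)

SQL, Args = PrePrepare(
   "SELECT a,b,c FROM table WHERE id = $id AND name = $name",
   dict(id=100, name='joe'))
# SQL  == "SELECT a,b,c FROM table WHERE id = $1 AND name = $2"
# Args == (100, 'joe')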

I don’t know about you, but we find this to be very, very useful.

If you pass tuples of (field, value) as positional arguments, then they will replace [Field], [Value], and [Field=Value] (in the SQL) with lists of fields, lists of values (e.g. $1, $2), or lists of field=values (e.g. name=$1, age=$2).  That really takes the verbosity out of INSERT and UPDATE statements with long field lists.
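
As a purely hypothetical illustration of that (the placeholder syntax and the exact call are assumptions based on the description above):

DB.Row(
   "INSERT INTO person [Field] VALUES [Value] RETURNING *",
   ('first_name', 'Joe'),
   ('last_name', 'Smith'),
)
# Might expand to something like:
#   "INSERT INTO person (first_name, last_name) VALUES ($1, $2) RETURNING *"
#   with arguments ('Joe', 'Smith')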

Conclusion

This is just in the early stages, and has a good deal of polishing to be done (especially on the WSGI side).  My purpose here was to introduce you to what you can expect to get with AppStruct, some of the rationale behind it, and that it’s really not that hard to take the bull by the horns and make software do what you want it to (especially if you use Python).

Feel free to comment.