Django community: RSS
This page, updated regularly, aggregates blog posts from the Django community.
-
django-meio-easytags 0.3 released!
Today I released version 0.3 of django-meio-easytags. It now supports template tags that accept positional arguments (*args) and keyword arguments (**kwargs). If you don't know how to use those, take a look at the Python FAQ. In this release I also included some documentation and created a page for the project. Any doubts, suggestions, feature requests, [...] -
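For readers who haven't written argument-parsing tags by hand, here is a minimal plain-Django sketch (not the django-meio-easytags API; the tag name and arguments are made up) of the *args/**kwargs parsing boilerplate such a library aims to remove:

from django import template

register = template.Library()

class GreetNode(template.Node):
    # Holds the compiled positional and keyword arguments of the tag.
    def __init__(self, args, kwargs):
        self.args = args
        self.kwargs = kwargs

    def render(self, context):
        # Resolve every argument against the current template context.
        args = [arg.resolve(context) for arg in self.args]
        kwargs = dict((k, v.resolve(context)) for k, v in self.kwargs.items())
        return u"args=%s kwargs=%s" % (args, kwargs)

@register.tag
def greet(parser, token):
    # Split e.g. {% greet 'a' var key='c' %} into positional and keyword parts.
    bits = token.split_contents()[1:]
    args, kwargs = [], {}
    for bit in bits:
        if "=" in bit:
            key, value = bit.split("=", 1)
            kwargs[key] = parser.compile_filter(value)
        else:
            args.append(parser.compile_filter(bit))
    return GreetNode(args, kwargs)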
Release early \\ (Release often?)
Righty, this feels all sorta official. So, the 0.1 release of Django REST framework went up to PyPI yesterday and I made the announcement on the django-users mailing list today. So that's that, then. I'd really appreciate any constructive criticism of … Continue reading → -
Optimization of getting random rows out of a PostgreSQL in Django
There was a really interesting discussion on the django-users mailing list about the most efficient way to select random rows from a SQL database. I knew that a plain RANDOM() ordering in SQL can be very slow on big tables, but I didn't know by how much, so I had to run a quick test. Cal Leeming discussed a snippet of his for paginating huge tables that uses the MAX(id) aggregate function. So, I did a little experiment on a table with 84,000 rows in it. Realistic enough to matter, even though it's short of millions. So, how long would it take to select 10 random items, 10 times? The benchmark code looks like this:

TIMES = 10

def using_normal_random(model):
    for i in range(TIMES):
        yield model.objects.all().order_by('?')[0].pk

t0 = time()
for i in range(TIMES):
    list(using_normal_random(SomeLargishModel))
t1 = time()
print t1-t0, "seconds"

Result: 41.8955321312 seconds. Nasty! Running this you'll also notice postgres spiking your CPU like crazy. A much better approach is to use Python's random.randint(1, <max ID>). It looks like this:

from django.db.models import Max
from random import randint

def using_max(model):
    max_ = model.objects.aggregate(Max('id'))['id__max']
    i = 0
    while i < TIMES:
        try:
            yield model.objects.get(pk=randint(1, max_)).pk
            i += 1
        except model.DoesNotExist:
            pass

t0 = time()
for i in range(TIMES):
    list(using_max(SomeLargishModel))
t1 = time()
print t1-t0, "seconds"

Result: 0.63835811615 seconds. Much more pleasant! UPDATE: Commenter Ken Swift asked what if your requirement is to select 100 random items instead of just 10. Won't those 101 database queries be more costly than … -
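The excerpt cuts off there, but one common way to reduce the query count (a sketch of my own, not necessarily the approach the post goes on to take) is to draw the random ids up front and fetch them with a single pk__in query:

from random import randint
from django.db.models import Max

def random_rows(model, count):
    # Pick candidate ids up front and fetch them in one query.
    # Gaps from deleted rows mean some candidates miss, so over-sample
    # a little and trim down to the count we actually want.
    max_ = model.objects.aggregate(Max('id'))['id__max']
    candidate_ids = set(randint(1, max_) for _ in range(count * 2))
    return list(model.objects.filter(pk__in=candidate_ids)[:count])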
Interview with Daniel Rus of Witmeet
We're starting a new category of articles in which we'll be discovering projects built with Django by the Spanish-speaking community. To kick it off, we're lucky to interview Daniel Rus, creator of Witmeet. Witmeet is a website for finding people to practise languages with, face to face, anywhere in the world. Once you sign up, you can configure your profile with the languages you speak and the ones you want to practise. You can create your own witmeets, choosing your availability and the places (cafés or other venues) where you'd like to meet to practise languages, or search for nearby witmeets created by other users. -
JSHint Edition Update 2011-03-01
Today we released the 2011-03-01 edition of JSHint, a community-driven code quality tool. This is the first release since the announcement, and so far the community feedback has been nothing but helpful. I would like to personally thank the following contributors for providing patches that were includ... -
Redis at Disqus
-
django-meio-easytags released!
django-meio-easytags: An easy way to create and parse custom template tags for Django's templating system. -
Nice testimonial about django-static
My friend Chris is a Django newbie who has managed to build a whole e-shop site in Django. It will launch in a couple of days and when it launches I will blog about it here too. He sent me this today, which gave me a smile: "I spent today setting up django_static for the site, and optimising it for performance. If there's one thing I've learned from you, it's optimisation. So, my homepage is now under 100KB (was 330KB), and it loads in about 5-6 seconds from a hard refresh (was 13-14 seconds at its worst). And I just got a 92 score on YSlow. I do believe I have the fastest tea website around now, and I still haven't installed caching. Wicked huh?" He's talking about using django-static. Then I got another email shortly after with this: "Correction - I get 97 on YSlow if I use a VPN. I just found that the Great Firewall tags extra HTTP requests onto every request I make from my browser, pinging a server in Shanghai with a PHP script which probably checks the page for its content or whether it's on some kind of blocked list. Cheeky buggers!" Isn't that interesting! (Note: … -
Why I forked JSLint to JSHint
Your sadly pathetic bleatings are harshing my mellow. —Douglas Crockford. This Friday we released JSHint, a code quality tool designed to detect errors and potential problems in JavaScript code and to enforce your team's coding conventions. JSHint is a fork of JSLint, the tool wri... -
Django 1.3 patterns for older django versions
Django 1.3 is approaching fast, with tons of bugfixes but also some very interesting features. Among those, two stand out because I think they greatly improve the way we write our Django apps: class-based generic views and the new contrib.staticfiles app. Class-based views: the benefits of class-based generic views are quite obvious: instead of wrapping an existing function-based generic view and/or passing it a countless number of arguments, functionality can easily be extended using inheritance. Although the learning curve is steeper, it's totally worth learning. But see the docs for that; it's not the point of this post. Let's say you've got a big project that runs on Django 1.2. You're in the process of upgrading it to 1.3 but you're not ready yet, and you want to use class-based views for new features? Meet django-cbv. Django-cbv is a backport of the class-based views from Django trunk that you can use with older Django versions. As of writing this, you need to install it from my GitHub fork since the package on PyPI is missing an import (Update: fixed in django-cbv 0.1.5):

pip install django-cbv

Once installed, you just need to add 'cbv.middleware.DeferredRenderingMiddleware' to your MIDDLEWARE_CLASSES. And then … -
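As a taste of what the post is pointing at, here is a minimal class-based view written against Django 1.3's own django.views.generic (with the django-cbv backport the import path may differ; the model and template names here are made up for illustration):

from django.views.generic import ListView

from myapp.models import Article  # hypothetical model

class ArticleListView(ListView):
    # Replaces the old function-based object_list generic view;
    # behaviour is customised by overriding attributes and methods.
    model = Article
    template_name = 'myapp/article_list.html'
    paginate_by = 20

    def get_queryset(self):
        # Narrow the default queryset, e.g. to published articles only.
        return Article.objects.filter(published=True)

# urls.py (Django 1.2/1.3 style):
# from django.conf.urls.defaults import patterns, url
# urlpatterns = patterns('',
#     url(r'^articles/$', ArticleListView.as_view(), name='article_list'),
# )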
Dutch python meeting summary (from 16 February 2011)
Last Wednesday we had a fun PUN meeting in our office in Utrecht at Nelen & Schuurmans. A nice turnout with 30 people all in all. Jasper Spaans videotaped everything, so that'll turn up on blip.tv soon. My brother Maurits also made a summary. He missed the first talk (which I have), but he has a summary of my own talk (which I haven't). Trac - Christo Butcher: Trac is for developers, for making developers' lives easier. About half of the participants use Trac. A core part of Trac is its wiki. Interesting since version 0.12: a side-by-side editor, with the wiki text format on the left and a live preview on the right. The second core part is the issue tracker. Nothing much special compared to other issue trackers, but pretty OK. The tracker is good both for bug tracking and for task/feature management. Third: source browsing, for Subversion, Mercurial and Git (the latter two by means of plugins). The main reason Fox IT (where he works) started with Trac is this source browser. Not the tracker or the wiki, but the source browser. Handy: the timeline, for browsing all the changes (code, issues, wiki) in the system. Handy to … -
Connecting anything to anything with Django
Edit 7/11/2011: I've added documentation and an example app. Introduction: I'm writing this post to introduce a new project I've released, django-generic-m2m, which, as its name indicates, is a generic ManyToMany implementation for Django models. The goal of this project is to provide a uniform API for both creating and querying generically-related content in a flexible manner. One use-case for this project would be creating semantic "tags" between diverse objects in the database. Connecting Models: What it's all about is connecting models together and, if you want, creating some metadata about the meaning of that relationship (i.e. a tag). To this end, django-generic-m2m does three things to make this behavior easier: it wraps up all querying and connecting logic in a single attribute that acts on both model instances and the model class; it allows any model to be used as the intermediary "through" model; and it provides an optimized lookup when GenericForeignKeys are used. An example: referring back to the diagram, let's create some models (these are the same models used in the testcases):

from django.db import models

from genericm2m.models import RelatedObjectsDescriptor

class Food(models.Model):
    name = models.CharField(max_length=255)

    related = RelatedObjectsDescriptor()

    def __unicode__(self):
        return self.name

class Beverage(models.Model):
    # ... same as above

class …
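Building on the models above, here is a short usage sketch of how the related descriptor is meant to be used; it is based on my reading of the django-generic-m2m README of the time, so the exact method names may have changed:

# Hedged sketch; method names follow the project's README of the time.
apple = Food.objects.create(name='apple')
coke = Beverage.objects.create(name='coke')

# Connect two arbitrary models through the generic m2m descriptor.
apple.related.connect(coke)

# Query the connections; this returns the intermediary "through" rows,
# from which the related objects on either side can be reached.
for rel in apple.related.all():
    print rel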