Django community: RSS
This page, updated regularly, aggregates blog posts from the Django community.
-
Call for contributors - Stream-Framework 1.1
Today we’ve released Stream Framework 1.1. Development has been a bit slow pending the rename from Feedly to Stream-Framework, but the community is growing faster than ever: many users of GetStream.io are looking at Stream Framework and vice versa. We would like to take this moment to encourage contributions. We’re actively accepting pull requests and appreciate all the help. The Python 3 support on the roadmap in particular is something you can easily get started with, so if you’ve been searching for a project to contribute to, now is a good time :)

Roadmap:
- Python 3 support (pending CQLEngine)
- Documentation improvements (see issues)
- Database backend to facilitate small projects
- Relevancy-based feeds (far future)
- An API similar to getstream.io to open Stream Framework to other languages (far future)

Let us know which features you are looking forward to!

What’s new since the last blog post:
- Ability to customize the Activity object used in feeds and the aggregated activity used in aggregated feeds
- Hooks to collect metrics
- Updated CQLEngine and Python-Driver
- Ability to filter Redis & Cassandra feeds using the .filter() syntax
- Ability to order Redis feeds using the .order() syntax (thanks Anislav)
- Redis improvements to counts (again, many … -
We’re a Gold Sponsor of Multiple Template Engines for Django
One of the most common components in Django to be replaced by alternatives is templates, usually for performance reasons, design reasons, or both. While there are numerous template bridge libraries to support the use of Jinja2, Mako, and other template languages in Django, there is no standard API. This is unfortunate, because one of the virtues of using Django is that we can share our efforts across third-party libraries. Fortunately, Django core developer Aymeric Augustin has started a crowdfunding campaign to create first-class support for third-party template engines in Django. We’re excited! Two Scoops Press is proud to be a gold sponsor of Aymeric Augustin’s Multiple Template Engines for Django campaign on Indiegogo. This is the last day of the campaign, so now’s your chance to show support as well. -
Introducing stream-django, our first framework integration
We are proud to announce the first release of Stream-Django, a Python package to add activity feeds to Django applications. This package comes with built-in integration for Django models, out-of-the-box feed management, and template helpers for presenting feeds to your users. The package is available on Github, and you can find a detailed explanation here. We also updated our example application to use stream-django. Tommaso -
Django runserver_plus with SSL and Firefox 33
So with version 33, Firefox did something rather annoying: it now uses a more restrictive library that rejects connections to servers running older versions of SSL. On the one hand, this is pretty awesome because at some point we all need to grow up and start using modern encryption, but on the other, it can make development really difficult when all you really need is an SSL setup -- any SSL setup -- to make your local development environment Just Work. We've been using django-extensions' runserver_plus feature, which is awesome because it includes a browser-based debugger and other really cool stuff, but also importantly, it supports running the Django runserver in SSL mode. This means that you can do stuff like: ./manage.py runserver_plus --cert=/tmp/temporary.cert And that's enough for you to be able to access your site over SSL: https://localhost:8000/ However, now that Firefox has thrown this monkeywrench into things, we spent far too much time today trying to figure out what was wrong and how to fix it, so I'm posting the answer here: basically, you just need a better cert than the one django-extensions creates for you automatically. So, instead of just running --cert=/path/to/file and … -
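One way to generate a stronger self-signed certificate is with openssl directly; this is a sketch, and the key size, digest, validity period, and file paths here are assumptions rather than anything the post specifies:

```shell
# Generate a 2048-bit RSA key and a SHA-256 self-signed certificate,
# valid for one year, with no passphrase (-nodes), for localhost.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes \
    -keyout /tmp/dev.key -out /tmp/dev.crt -days 365 \
    -subj "/CN=localhost"
```

You would then point runserver_plus at the generated files (the exact flag names vary between django-extensions versions).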
Reading and writing binary STL files with Numpy
After seeing Sukhbinder’s implementation of reading STL files with Numpy I thought it would be a nice thing to have a simple STL class to both read and write the binary files.

import struct

import numpy


class Stl(object):
    dtype = numpy.dtype([
        ('normals', numpy.float32, (3, )),
        ('v0', numpy.float32, (3, )),
        ('v1', numpy.float32, (3, )),
        ('v2', numpy.float32, (3, )),
        ('attr', 'u2', (1, )),
    ])

    def __init__(self, header, data):
        self.header = header
        self.data = data

    @classmethod
    def from_file(cls, filename, mode='rb'):
        with open(filename, mode) as fh:
            header = fh.read(80)
            size, = struct.unpack('@i', fh.read(4))
            data = numpy.fromfile(fh, dtype=cls.dtype, count=size)
            return Stl(header, data)

    def to_file(self, filename, mode='wb'):
        with open(filename, mode) as fh:
            fh.write(self.header)
            fh.write(struct.pack('@i', self.data.size))
            self.data.tofile(fh)


if __name__ == '__main__':
    # Read from STL file
    stl = Stl.from_file('test.stl')

    # Increment the X axis by one
    stl.data['v0'][:, 0] += 1
    stl.data['v1'][:, 0] += 1
    stl.data['v2'][:, 0] += 1

    # Write back to file
    stl.to_file('test.stl') -
First Hamburg Python Unconference
On 29 and 30 November 2014, the Python Usergroup Hamburg and the computer science student council of the University of Hamburg, together with boot e.V. [1], are organising the 1st Python Unconference Hamburg. All Python developers are invited, whether from science and engineering, finance, big data, machine learning, or web development, from beginners to old hands! In barcamp style there will be talks and discussions from all areas of the Python world. The breaks will offer not only food and drinks but also plenty of opportunities for direct personal exchange. Early bird tickets are available for 35 euros until 15 October. More information is available at www.pyunconf.de. [1] The association BOOT - Best Of Open Technologies e.V. also runs the PHP Unconference Hamburg and the PHP Unconference Europe, as well as the JS Unconference Hamburg and the Python Unconference Hamburg, so experience in organising such events is on hand even though this is the first Python Unconference. -
How to use django-celery for asynchronous tasks in Django (2)
In the previous post we introduced how to use Celery in a development environment. Next we cover how to use Celery in a deployment environment.

1. Simple setup
A simple Celery stack consists of one queue and one worker process. Start the worker with: python manage.py celery worker -B This command assumes django-celery, but you can of course also start the worker with Celery itself. We normally manage starting and restarting the Celery worker with supervisord rather than by hand; we will introduce supervisord in detail in a future post, and for now it is enough to know it is a process manager. You could also choose a similar system such as init.d, upstart, runit, or god. The "-B" option tells Celery to start celery beat in the same process as the worker, so that periodic tasks get executed. On the deployment server we use Redis or RabbitMQ as the broker, and in this simple stack we can store task results in the Django database, or simply ignore results altogether.

2. Full setup
If the simple setup does not meet our needs, only a few changes are required for a full Celery setup. In the full setup we use multiple queues to separate task priorities, configuring a worker with a different concurrency setting for each queue. The beat process is also separated from the worker processes:

# Default queue
python manage.py celery worker -Q celery
# High priority queue, 10 workers
python manage.py celery worker -Q high -c 10
# Low priority queue, 2 workers
python manage.py celery worker -Q low -c 2
# Beat process
python manage.py celery beat

Note that "high" and "low" are just queue names with no special meaning; by giving the high-priority queue a worker with higher concurrency, we let it use more resources. As before, these commands are managed by supervisor. For the broker we still use Redis or RabbitMQ, while task results can be stored in a system with fast writes such as Redis or Memcached. If necessary, the worker processes can be moved to other servers, as long as they share one broker and result store.

3. Scalability
We cannot simply keep adding workers to improve performance, because each worker consumes resources. The default concurrency setting creates as many workers as there are CPUs, with a new process for each worker, so setting concurrency too high will quickly exhaust the server's CPU and memory. For I/O-bound tasks, we can instead tell the worker to use a gevent or eventlet pool rather than more processes. This uses far less memory while raising effective concurrency. Be aware, though, that if any library involved has not been monkey-patched for greenlets, it may block all the tasks!

4. A caveat
Also pay attention to Django transactions. Their behaviour differs depending on the Django version and on whether the task is fired from a web request, so consult the relevant documentation. -
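The gevent pool mentioned above is selected with the worker's pool option; the queue name and concurrency value in this sketch are illustrative assumptions:

```shell
# Run the worker for an I/O-bound queue with a gevent pool instead of
# prefork processes; one process can then juggle many concurrent tasks.
python manage.py celery worker -Q low -P gevent -c 100
```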
How to use Celery for asynchronous tasks in Django (1)
This post covers using Celery in a development environment; please do not use this setup on a deployment server. Many Django applications need to execute asynchronous tasks so that HTTP requests are not held up. There are many ways to run asynchronous tasks, and Celery is a good choice because it has broad community support, scales well, and integrates nicely with Django. Celery is also widely used outside Django, so once you learn it you can easily use it in other projects.

1. Celery versions
This post targets Celery 3.0.x. Earlier versions may differ slightly.

2. About Celery
Celery's main purpose is to execute asynchronous tasks, with optional delayed or scheduled execution. Why do we need asynchronous tasks? First, suppose a user makes a request and waits for it to return. The view behind that request may need to run a task that takes far longer than the user is willing to wait; when the task does not need to run immediately, we can have Celery execute it in the background without interrupting the user's browsing. Tasks that call out to remote servers are a typical case where we cannot predict how long they will take. Second, we may need to run certain tasks periodically. For example, we may need to check the weather forecast every hour and store the data in the database; we can write that task and have Celery run it hourly, so our web application always has fresh forecast data. A task here is simply a Python function. Running a task periodically can be seen as delayed execution of that function: we can have Celery call function task1 in five minutes with arguments (1, 2, 3), or run it every day at midnight. We prefer to put Celery inside the project so tasks can access the shared database and Django settings. When a task is ready to run, Celery puts it into a queue, which holds the list of runnable tasks. We can use multiple queues, but for simplicity we use only one here. Putting a task into the queue is like adding it to a todo list. To make tasks actually run, we also need workers running in other threads; workers watch the pending tasks and execute them one by one. You can use multiple workers, often on different servers, but again for simplicity we use just one. We will discuss queues, workers, and another very important process later; for now, let's get our hands dirty.

3. Installing Celery
We can install it with pip in a virtualenv: pip install django-celery

4. Django settings
For now we use the Django runserver to start Celery, and for the Celery broker we use the Django database broker implementation. All we need to know at this point is that Celery needs a broker, and Django itself can act as one. (In deployment, though, we had better use a more stable and efficient broker such as Redis.) In settings.py:

import djcelery
djcelery.setup_loader()
BROKER_URL = 'django://'
...
INSTALLED_APPS = (
    ...
    'djcelery',
    'kombu.transport.django',
    ...
)

The first two lines are required, and the third tells Celery to use the Django project as the broker. Adding djcelery to INSTALLED_APPS is required, and kombu.transport.django is the Django-based broker. Finally, create the database tables Celery needs. If you use South for migrations, run: python manage.py migrate Otherwise (on either Django 1.6 or Django 1.7), run: python manage.py syncdb

5. Creating a task
As said above, a task is just a Python function, but Celery needs to know that this function is a task, so we can use the decorator Celery ships with: @task. Create tasks.py in a Django app directory:

from celery import task

@task()
def add(x, y):
    return x + y

When djcelery.setup_loader() in settings.py runs, Celery inspects the tasks.py file in every app directory listed in INSTALLED_APPS, finds the functions marked as tasks, and registers them as Celery tasks. Marking a function as a task does not stop it from executing normally; you can still call it as usual: z = add(1, 2).

6. Executing a task
Let's start with a simple example. Say we want to execute a task asynchronously after a user makes a request and return the response immediately, so the request is not blocked and the user has a smooth experience. We can use .delay, for example in a view in views.py:

from myapp.tasks import add
...
add.delay(2, 2)
...

Celery adds the task to the queue and returns immediately. A waiting worker, seeing the task, executes it as configured and removes it from the queue. The worker then runs the equivalent of:

import myapp.tasks
myapp.tasks.add(2, 2)

7. About imports
Note that task imports must be consistent. Because djcelery.setup_loader() registers each task under its app name from INSTALLED_APPS plus .tasks.function_name, importing the same task via a different path (for example from myproject.myapp.tasks import add, when the Python path differs), means Celery cannot tell it is the same task, which may cause strange bugs. … -
What's new in Django 1.7
A few weeks ago I gave a talk at AirConf 2014, a virtual conference organised by my friends at AirPair, about what's new in Django 1.7. Here's the video: I kept the slides simple, on purpose, as most of the interesting stuff was in the code demos. You can find them here anyway. A quick aside about speaking at this sort of virtual event: there's a very noticeable lack of feedback, since you're talking to yourself rather than a room full of presumably-interested people. It makes it very difficult to judge whether the talk is going well, or even if anyone is actually listening -- compared to a normal conference, where you know that if everyone is immersed in their laptops rather than looking at you, the talk isn't catching their imaginations. Still, I think the talk was well received, and I enjoyed giving it. -
Related ManyToManyField in Django admin site - continued
A few weeks ago, I posted an article about displaying related ManyToManyFields in the Django admin site. The trick presented in that article works and was tested in Django 1.5.8 and 1.6.5 (the latest releases at the time). In the meantime, as you probably know, Django 1.7 was released, as well as a security update. With any one of those 3 releases, the trick presented in that article no longer works as-is; it requires a bit more code. In this article, I'll give out that code and explain why it is now needed. -
Slowly moving through town
After getting questions about it I recently added the Slow Exit contribution to the main repository as an example. Delayed movement is something often seen in various text games; it simply means that the time to move from room to room is artificially extended. Evennia's default model uses traditional MU* rooms. These are simple nodes with exits linking them together. Such Rooms have no internal size and no inherent spatial relationship to each other, and moving from any Room to any other happens as fast as the system can process the movement. Introducing a delay on exit traversal can have a surprisingly big effect on a game:

- It dramatically changes the "feel" of the game. It often makes the game feel less "twitch" and slows things down in a very real way. It lets Players consider movement as a "cost".
- It simulates movement speed. A "quick" (or maybe well-rested) character might perceive an actual difference in traversal. The traversal speed can vary depending on whether the Character is "running" or "walking".
- It can emulate travel distance. An Exit leading to "the top of the mountain" may take longer to traverse than going "inside the tent".
- It makes movement a "cost" to take into consideration in the … -
Python/Django example app on Heroku is live!
Hello there, We are excited to announce the first of our GetStream example applications. We hope that with this series of examples people will see how easy it is to get started with Stream and how an application can implement news feeds in less than one day of work. Our first example is written in Python and uses the Django web framework (and of course Stream's Python client). It is hosted on Github and can be found here. To make things even easier we have integrated Stream with Heroku and, thanks to the awesome deploy button, deploying the demo application only takes 3 clicks and less than 2 minutes! It's free, so go ahead and start the Deploy to Heroku while you continue reading. The example app we have built is a very simple clone of Pinterest; it ships with 3 different kinds of feeds:

- user feed, which shows the current user's pins
- flat feed, which shows the pins from the users you follow
- aggregated feed, which shows the pins from the users you follow and aggregates them

Now, since you are probably going to be the only user (at least when you try it out), we automatically log … -
Django: From Runserver to Reddit Hugs
Last month, I presented High Performance Django: From Runserver to Reddit Hugs at DjangoCon US in Portland. My assertion was that Django, left to its own devices, does not scale. With the right supporting servers, however, it can scale fantastically. I gave a live demo of a Django site in multiple different server configurations with Docker on EC2, showing how each one affected performance. For those who missed the conference, here's the video: -
Read-only data from your database in Django
I had the need to create a column in some database tables that is completely controlled by the database, but the value of which is _sometimes_ needed by the [Django](https://www.djangoproject.com/) object. It should never be presented in a Form, and never, ever be written to by the Django infrastructure. So, we need a way to fetch the data from the database that, even if the value is changed and the object saved, is never written back. The detail of how this data is set in the database is irrelevant: it's a column that gets its value from a sequence (and, incidentally, this sequence is shared across multiple tables). But, we need a way to get this data. A nice technique is to leverage two parts of Django: the `QuerySet.extra(select={})` method to actually add this field to the query, and `Manager.get_queryset()` (`get_query_set()` in older versions of Django) to make this apply to every fetch of the objects. Our extra column will be called `sequence_number` {% highlight python %}
class SequenceNumberManager(models.Manager):
    def get_queryset(self):
        # Use get_query_set() on Django < 1.6.
        return super(SequenceNumberManager, self).get_queryset().extra(select={
            'sequence_number': '"%s"."sequence_number"' % self.model._meta.db_table
        })


class Thing(models.Model):
    # Column definitions. Do not define sequence_number!

    objects = SequenceNumberManager()
{% endhighlight %} That's it. Now, `Thing.objects.all()[0].sequence_number` will give … -
The Class Based "View"
People often find working with class based views hard, but they are simple... once you spend time figuring them out. In this video we start with the basic building blocks and work our way through to completely understanding the base View of (generic) class based views. Watch Now... -
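To illustrate the idea at the heart of the base View, here is a simplified sketch of the as_view/dispatch pattern; this is not Django's actual implementation (the real request object, method list, and 405 handling differ), just the shape of the building blocks:

```python
# Simplified sketch of the class-based-view dispatch pattern:
# as_view() returns a plain function the URL resolver can call, and
# dispatch() routes by HTTP method to a handler method of the same name.
class View(object):
    http_method_names = ['get', 'post']

    @classmethod
    def as_view(cls, **initkwargs):
        def view(request, *args, **kwargs):
            # A fresh instance per request keeps per-request state isolated.
            self = cls(**initkwargs)
            return self.dispatch(request, *args, **kwargs)
        return view

    def dispatch(self, request, *args, **kwargs):
        # Here `request` is just a dict standing in for a real request object.
        method = request['method'].lower()
        handler = getattr(self, method, None)
        if handler is None or method not in self.http_method_names:
            return '405 Method Not Allowed'
        return handler(request, *args, **kwargs)


class HelloView(View):
    def get(self, request):
        return 'Hello, GET!'
```

Calling `HelloView.as_view()` gives back an ordinary function, which is why a class-based view can be wired into a URLconf exactly like a function-based one.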
New Django Server Setup: Part 2
New Django Server Setup: Part 2 -
Celery in Production
(Thanks to Mark Lavin for significant contributions to this post.) In a previous post, we introduced using Celery to schedule tasks. In this post, we address things you might need to consider when planning how to deploy Celery in production. At Caktus, we've made use of Celery in a number of projects ranging from simple tasks to send emails or create image thumbnails out of band to complex workflows to catalog and process large (10+ Gb) files for encryption and remote archival and retrieval. Celery has a number of advanced features (task chains, task routing, auto-scaling) to fit most task workflow needs. Simple Setup A simple Celery stack would contain a single queue and a single worker which processes all of the tasks as well as schedules any periodic tasks. Running the worker would be done with python manage.py celery worker -B This is assuming using the django-celery integration, but there are plenty of docs on running the worker (locally as well as daemonized). We typically use supervisord, for which there is an example configuration, but init.d, upstart, runit, or god are all viable alternatives. The -B option runs the scheduler for any periodic tasks. It can also be run … -
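A supervisord program entry for the worker might look something like the following sketch; the program name, paths, and user are illustrative assumptions, not taken from the post:

```
[program:celery-worker]
command=/path/to/env/bin/python /path/to/project/manage.py celery worker -B
directory=/path/to/project
user=www-data
autostart=true
autorestart=true
stopwaitsecs=600
```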
How we use Virtualenv, Buildout and Docker
There are several technologies (in the Python world) for giving projects isolated environments. In this article I will describe how we use Virtualenv, Buildout and Docker for a project I’m working on at Fox-IT. Virtualenv The first tool I’ll discuss here is Virtualenv. According to its documentation, Virtualenv is a tool to create isolated Python environments. What does it do? It offers a way to install Python packages independent of the global site-packages directory. This lets you install packages even when you do not have permission to write to the global site-packages directory, and it prevents conflicts with packages installed there (or in other Virtualenv environments, for that matter). For instance, my current Ubuntu 14.04 installation has the requests package globally installed. However, it is version 2.2.1. What if I need a newer version? Or worse: what if my code is incompatible with a newer version and the package is updated for some reason (perhaps with a system upgrade)? How do we use it? For the project I’m working on, we have a couple of small tools written in Python that we need running in their own separate environment (on different machines than the code of the … -
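As a quick sketch of the isolation described above (the directory name is an arbitrary choice, and `python3 -m venv` is the modern stdlib equivalent of the `virtualenv` command the post discusses):

```shell
# Create an isolated environment whose interpreter and site-packages
# live entirely under ./env, independent of the system Python.
python3 -m venv env

# The env's interpreter reports a prefix inside ./env, not the system one.
./env/bin/python -c 'import sys; print(sys.prefix)'

# Inside the env you could now run `./env/bin/pip install requests`
# without write access to, or any effect on, the global site-packages.
```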
Using Postgres Composite Types in Django
Note: this post turned out to be far more complicated than I had hoped. I may write another one that deals with a less complicated type! [Postgres](http://www.postgresql.org) comes with a pretty large range of column types, and the ability to use these types in an ARRAY. There's also [JSON(B)](http://www.postgresql.org/docs/9.4/static/datatype-json.html) and [Hstore](http://www.postgresql.org/docs/9.4/static/hstore.html), which are useful for storing structured (but possibly varying) data. Additionally, there are also a range of, well, [`range`](http://www.postgresql.org/docs/9.4/static/rangetypes.html) types. However, sometimes you actually want to store data in a strict column, but that isn't a simple scalar type, or one of the standard range types. Postgres allows you to define your own composite types. There is a command [`CREATE TYPE`](http://www.postgresql.org/docs/9.2/static/sql-createtype.html) that can be used to create an arbitrary type. There are four forms: for now we will just look at Composite Types. We will create a Composite type that represents the opening hours for a store, or more specifically, the default opening hours. For instance, a store may have the following default opening hours: {% highlight text %}
+------------+--------+---------+
| Day        | Open   | Close   |
+------------+--------+---------+
| Monday     | 9 am   | 5 pm    |
| Tuesday    | 9 am   | 5 pm    |
| Wednesday  | …
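The basic shape of the `CREATE TYPE` composite form looks like the following sketch; the type and attribute names here are my own placeholders, since the post is truncated before it shows its actual definition:

```
-- A hypothetical composite type pairing an opening and a closing time.
CREATE TYPE opening_hours AS (
    open_time  time,
    close_time time
);

-- It can then be used like any column type, including inside an ARRAY,
-- e.g. one entry per day of the week:
--     default_opening_hours opening_hours[]
```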