Django community: RSS
This page, updated regularly, aggregates blog posts from the Django community.
-
My First Django Girls Event
Since the first Django Girls event I've watched the movement grow with a sense of awe and inevitability. There is something about it that is both contagious and powerful, in a very good way. This past weekend I had my first chance to attend one of their events in Ensenada, Mexico. This is what we saw: a room full of attendees with laser focus. The coaches were clearly inspired by the dedication of the women who had come to learn and grow. [Photo: #djangogirls, posted by Daniel Greenfeld (@pydanny) on May 26, 2015 at 7:42am PDT] By the end of the day, the energy hadn't dwindled, it had accelerated. [Photo: Saying goodbye to #djangogirls Ensenada. Everyone stayed until the very end. Posted by Daniel Greenfeld (@pydanny) on May 26, 2015 at 8:10am PDT] No one wanted the day to end. [Photo: #djangogirls Ensenada attendees so dedicated they stayed after the event finished! :-) Posted by Daniel Greenfeld (@pydanny) on May 26, 2015 at 8:14am PDT] We did our small part: we coached and did our best to give an inspirational talk. [Slides: Programming Gives You Superpowers, by Audrey & Daniel Roy Greenfeld] -
Django interview questions ...
... and some answers. Well, I haven't conducted any interviews recently, but this one has been lying in my drafts for quite a while, so it is time to dust it off and finish it. As I said in Python Interview Question and Answers, these are basic questions to establish the basic level of the candidates. Django's request/response cycle You should be aware of the way Django handles incoming requests: the execution of the middlewares, the work of the URL dispatcher, and what the views should return. It is not necessary to know everything in the tiniest detail, but you should be generally aware of the whole picture. For reference you can check "the life of the request" slide from my Introduction to Django presentation. Middlewares - what they are and how they work Middlewares are one of the most important parts of Django. Not only because they are quite powerful and useful, but also because a lack of knowledge about how they work can lead to hours of debugging. From my experience the process_request and process_response hooks are the most frequently used, and those are the ones I always ask about. You should … -
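To illustrate the process_request/process_response hooks the author asks about, here is a minimal sketch of the hook pattern in plain Python. The names are invented for illustration; real Django middleware is configured in settings, works on HttpRequest/HttpResponse objects, and has more nuanced short-circuit rules that changed across versions:

```python
# Invented sketch of the classic (pre-1.10) Django middleware hook pattern.
# Requests and responses are plain dicts here, not Django objects.

class AuthMiddleware:
    def process_request(self, request):
        # Runs before the view. Returning a response short-circuits the view.
        if "user" not in request:
            return {"status": 403, "body": "forbidden"}
        return None

    def process_response(self, request, response):
        # Runs after the view, in reverse middleware order.
        response["x-processed"] = True
        return response


def run_middleware_stack(middlewares, view, request):
    response = None
    for mw in middlewares:
        response = mw.process_request(request)
        if response is not None:        # a middleware answered early
            break
    if response is None:
        response = view(request)
    for mw in reversed(middlewares):
        response = mw.process_response(request, response)
    return response


def hello_view(request):
    return {"status": 200, "body": "hello " + request["user"]}
```

Forgetting that a process_request hook can short-circuit the view is exactly the kind of thing that leads to the "hours of debugging" mentioned above.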
Squashing and optimizing migrations in Django
With Django 1.7 we got built-in migrations and a management command, squashmigrations, to squash a set of existing migrations into one optimized migration - for faster test database building and to remove some legacy code/history. Squashing works, but it still has some rough edges and requires some manual work to get the best out of a squashed migration. Here are a few tips for squashing and optimizing squashed migrations. -
Function caching decorator [reprise]
A sudden inspiration to write a blog post. A long time ago, I wrote a post about a decorator that could somehow cache an expensive function. There were some ideas in the comments, but I never really followed up on the idea I spawned in that particular post. And I never really gave it more thought or research either. Today I was at the PyGrunn conference. Tom Levine, author of Vlermv, held a presentation about that package. He mentioned that he wrote a decorator called `cache` (ctrl+f on the page I linked) that he uses for exactly the goal I wrote about in my original post. He also noted that `cache` would probably be a bad name for a decorator like that. At the end of the presentation there was some time for questions. A lot of people gave Tom tips on packages he could look into, and one helpful attendee called out Python's `memoised` decorator. I noted that one of the commenters on my original post also named memoize, but that commenter linked to a decorator inside a Plone package. I searched a bit on the internet today and there's a class inside the PythonDecoratorLibrary … -
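For reference, the memoize idea under discussion can be written in a few lines. This is a generic sketch, not any of the decorators named in the post; in practice the standard library's functools.lru_cache covers the same ground:

```python
import functools

def memoize(func):
    """Cache results keyed by the function's positional arguments.

    A generic sketch of the caching-decorator idea; functools.lru_cache
    in the standard library does this (and more) out of the box.
    """
    cache = {}

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)   # compute once per distinct args
        return cache[args]
    return wrapper


calls = []

@memoize
def slow_square(n):
    calls.append(n)                     # record real invocations
    return n * n
```

Calling `slow_square(4)` twice runs the body only once; the second call is served from the cache.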
Pygrunn: Leveraging procedural knowledge - K Rain Leander
(One of the summaries of the 2015 Pygrunn conference) K Rain Leander works at Red Hat and yes, she wore a bright red hat :-) She's a python and django newbie. She knows how it is to be a newbie: there is so much in linux that there are always areas where you're a complete newbie. So everyone is helpful there. "Amsterdam is the capital of the Netherlands" is declarative knowledge. Procedural knowledge is things like learning to ride a bike or a new language. So: what versus how. You might know declaratively how to swim, but procedurally you might still drown: you need to practice and try. Some background: she was a dancer in the USA. Unless you're famous, you barely scrape by financially. So she started teaching herself new languages. Both real-life languages and computer languages. CSS and HTML for starters. And she kept learning. She got a job at Red Hat. You have to pass an RHCE certification test within 90 days of starting work there - or you're fired. She made it. She has a military background. In bootcamp, the purpose is not the pushups and the long runs. The goal is to break you down … -
Pygrunn: IPython and MongoDB as big data scratchpads - Jens de Smit
(One of the summaries of the 2015 Pygrunn conference) A show of hands: about half the people in the room have used mongodb and half used ipython notebooks. There's not a lot of overlap. Jens de Smit works for Optiver, a financial company. A "high-frequency trader", so they use a lot of data and they do a lot of calculations. They do a lot of financial transactions and they need to monitor whether they made the right trades. Trading is now almost exclusively done electronically. Waving hands and shouting on the trading floor at a stock exchange is mostly a thing of the past. Match-making between supply and demand is done centrally. It started 15 years ago. The volume of transactions really exploded. Interesting fact: the response time has gone from 300ms to just 1ms! So... being fast is important in electronic trading. If you're slow, you trade at the wrong prices. Trading at the wrong prices means losing money. So speed is important, as is making the right choices. What he had to do is figure out how fast an order was made and whether it was a good order. Non-intrusively. So: what market event did we … -
Pygrunn: Python, WebRTC and You - Saúl Ibarra Corretgé
(One of the summaries of the 2015 Pygrunn conference) Saúl Ibarra Corretgé does telecom and VOIP stuff for his work, which is what webRTC calls legacy :-) webRTC is Real-Time Communication for the web via simple APIs. So: voice calling, video chat, P2P file sharing without needing internal or external plugins. Basically it is a big pile of C++ that sits in your browser. One of the implementations is http://www.webrtc.org/. Some people say that webRTC stands for Well, Everybody Better Restart Their Chrome, because the browser support is mostly limited to chrome. There's a plugin for IE/safari, though. There are several javascript libraries for webRTC. They help you set up a secure connection to another person (a "RTCPeerConnection"). The connection is direct, if possible. If not, due to firewalls for instance, you can use an external server. It uses ICE, which means Interactive Connectivity Establishment (see ICE trickle, which he apparently used). A way to set up the connection. Once you have a connection, you have an RTCDataChannel, which you can use, for instance, to send a file from one browser to another. As a testcase, he wrote Call Roulette. The app is in python, but in the browser … -
Pygrunn: Reliable distributed task scheduling - Niels Hageman
(One of the summaries of the 2015 Pygrunn conference) Note: see Niels Hageman's somewhat-related talk from 2012. Niels works at Paylogic. Wow, the room was packed. They discovered the normal problem of operations that took too long for the regular request/response cycle. The normal solution is to use a task queue. Some requirements: support python, as most of their code is in python. It has to be super-reliable. It also needs to allow running in multiple data centers (for redundancy). Ideally a low-maintenance solution, as they already have enough other work. Option 1: celery + rabbitMQ. It is widely used and relatively easy to use. But rabbitMQ was unreliable. With alarming frequency, the two queues in the two datacenters lost sync. They also got clogged from time to time. Option 2: celery + mysql. They already use mysql, which is an advantage. But... the combination was buggy and not production-ready. Option 3: gearman with mysql. The python bindings were buggy and unmaintained. And you could only run one gearman bundle, so multiple datacenters were out of the window. Option 4: do it yourself. They did this and ended up with "Taskman" (which I couldn't find online, they're planning on … -
Pygrunn: Data acquisition with the Vlermv database - Thomas Levine
(One of the summaries of the 2015 Pygrunn conference) Thomas Levine wrote vlermv, a simple "kind of database" that uses folders and files. Python is always a bit verbose when dealing with files, so that's why he wrote vlermv. Usage:

    from vlermv import Vlermv
    vlermv = Vlermv('/tmp/a-directory')
    vlermv['filename'] = 'something'
    # ^^^ This saves a python pickle of 'something' to /tmp/a-directory/filename

The advantage is that the results are always readable, even if you lose the original program. You can choose a different serializer, for instance json instead of pickle. You can also choose your own key_transformer. A key_transformer translates a key to a filename. Handy if you want to use a datetime or tuple as a key, for instance. The two hard things in computer science are: cache invalidation, and naming things. Cache invalidation? Well, vlermv doesn't do cache invalidation, so that's easy. Naming things? Well, the name 'vlermv' comes from typing randomly on his (dvorak) keyboard... :-) Testing an app that uses vlermv is easy: you can mock the entire database with a simple python dictionary. What if vlermv is too new for you? You can use the standard library shelve module that does mostly the same, only it stores … -
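The shelve module mentioned at the end of the summary gives a similar persistent, dict-like store straight from the standard library (stored in one file rather than one file per key). A quick sketch:

```python
# Standard-library alternative to vlermv: shelve gives a dict-like object
# whose values are pickled to disk, much like the vlermv usage shown above.
import os
import shelve
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "scratch")

with shelve.open(path) as db:
    db["filename"] = "something"    # pickled and persisted to disk

# Reopening the shelf in a fresh handle shows the value survived.
with shelve.open(path) as db:
    value = db["filename"]
```

The trade-off versus vlermv is readability on disk: a shelf is one opaque database file, whereas vlermv leaves you a folder of individually inspectable pickles.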
Pygrunn: Laurence de Jong - Towards a web framework for distributed apps
(One of the summaries of the 2015 Pygrunn conference) Laurence de Jong is a graduate student. Everyone uses the internet. Many of the most-used sites are centralized. Centralization means control. It also gives scale advantages, like gmail's great spam filter. It also has drawbacks. If the site goes down, it is really down. Another drawback is the control they have over our data and what they do with it. If you're not paying for it, you're the product being sold. Also: eavesdropping. Centralized data makes it easy for agencies to collect the data. And: censorship! A better way would be decentralized websites. There are existing decentralized things like Freenet, but they're a pain to install and the content on there is not the content you want to see... And part of it is stored on your hard disk... See also Mealstrom, which distributes websites as torrents. A problem there is the non-existence of proper decentralized DNS: you have unreadable hashes. A solution could be the blockchain system from bitcoin; that is what namecoin does. This way, you could store secure DNS records pointing to torrent hashes in a decentralized way. https://github.com/HelloZeroNet/ZeroNet uses namecoin to have proper DNS addresses and to download the … -
Pygrunn: Orchestrating Python projects using CoreOS - Oscar Vilaplana
(One of the summaries of the 2015 Pygrunn conference) (Note: Oscar Vilaplana had a lot of info in his presentation and also a lot on his slides, so this summary is not as elaborate as what he told us. Wait for the video for the full version.) "Orchestrating python": why? He cares about reliability. You need a static application environment. Reliable deployments. Easy and reliable continuous integration. And self-healing. It's also nice if it is portable. A common way to make scalable systems is to use microservices. You compose, mix and extend them into bigger wholes. Ideally it is "cluster-first": you also test locally with a couple of instances. A "microservices architecture". Wouldn't it be nice to take the "blue pill" and move to a different reality? One where you have small services, each running in a separate container without a care for what occurs around it? No sysadmin stuff? And similarly, the smart infrastructure people only have to deal with generic containers that can't break anything. He did a little demo with rethinkdb and flask. The demo used CoreOS: kernel + docker + etcd. CoreOS uses a read-only root filesystem and by design it doesn't have … -
Pygrunn: ZeroMQ - Pieter Hintjens
(One of the summaries of the 2015 Pygrunn conference) Pieter Hintjens has quite some experience with distributed systems. Distributed systems are, to him, about making our systems look more like the real world. The real world is distributed. Writing distributed systems is hard. You need a big stack. The reason we use http so much is that it was one of the first protocols that was pretty simple and that we could understand. Almost everything seems to be http now. So: the costs of such a system must be low. He really likes ZeroMQ, especially because it makes it cheap. We lack a lot of knowledge. The people that can do this well are few. Ideally, the community should be bigger. We have to build the culture, build the knowledge. Zeromq is one of the first bigger open source projects that succeeded. Conway's law: an organization will build software that looks like itself. A centralized, power-hungry organization will probably build centralized, power-hungry software. So: if you want to write distributed systems, your organization has to be distributed! Who has meetings in their company? They are bad, bad, bad. They're blocking. You have to "synchronize … -
Whats So Good About Django Traceback?
When you are working on a Django project, if you make any errors, it will throw a simple traceback in the terminal where you started the server. If you go to the browser, you will find a rich traceback like this. Most Python developers discover django-extensions within a few weeks of starting to work with Django, and start using the Werkzeug debugger. Werkzeug has a lot of advantages compared to the default Django traceback. I also used it for a while. For the same error, Werkzeug throws a traceback like this. One thing I really like about the Django traceback is the distinction between user code and internal Django code. Most of the time, developers are looking for a bug in their own code, not in Django. So Django makes it easy to skip over the frames that don't matter and focus on the ones that matter most. It also shows the local vars in each frame. With this you can instantly look at the variables to find out why the error occurred (see Django ticket #11834 for more discussion about this). These two features make it very easy to track down most common errors. -
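The user-code vs. framework-code distinction described above can be imitated with the standard library's traceback module. Filtering on "site-packages" in the path is only a heuristic for this sketch, not what Django's debug page actually does:

```python
import traceback

def user_frames(exc):
    """Keep only the traceback frames that don't come from installed packages.

    A sketch: frames whose file path contains "site-packages" are treated
    as framework code and dropped, leaving the frames in your own code.
    """
    frames = traceback.extract_tb(exc.__traceback__)
    return [f for f in frames if "site-packages" not in f.filename]

def boom():
    raise ValueError("example error")

try:
    boom()
except ValueError as err:
    frames = user_frames(err)   # the surviving frames point at our code
```

The innermost surviving frame is where you'd start looking for the bug.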
Keynote by Catherine Bracy (PyCon 2015 Must-See Talk: 4/6)
Part four of six in our PyCon 2015 Must-See Series, a weekly highlight of talks our staff enjoyed at PyCon. My recommendation would be Catherine Bracy’s Keynote about Code for America. Cakti should be familiar with Code for America. Colin Copeland, Caktus CTO, is the founder of Code for Durham and many of us are members. Her talk made it clear how important this work is. She was funny, straight-talking, and inspirational. For a long time before I joined Caktus, I was a “hobbyist” programmer. I often had time to program, but wasn’t sure what to build or make. Code for America is a great opportunity for people to contribute to something that will benefit all of us. I have joined Code for America and hope to contribute locally soon through Code for Durham. -
PyCon Sweden 2015
In a few words, PyCon Sweden 2015 was awesome. Honestly, this was my first Python conference ever, but I really hope it won't be the last. Beyond the awesome talks and great organisation, it was really nice to spend some time with like-minded people and talk about technology, the universe and everything else. I have met some old friends and made some new ones, but let's get back to the talks. Unfortunately I was not able to see all of them, but here is a brief summary of those I saw and found really interesting. It all started with Ian Ozsvald and his awesome talk about "Data Science Deployed" (slides). The most important points here were: log everything; think about data quality, don't use everything, just what you need; think about turning data into business value; start using your data. Then Rebecca Meritz talked about "From Explicitness to Convention: A Journey from Django to Rails" (slides). Although the title sounds a bit contradictory, this was not the usual Django vs Rails talk. At least to me it was more like a comparison between the two frameworks, showing their differences, weak and strong sides. While I am a Django user, I … -
Django second AutoField
Sometimes, your ORM just seems to be out to get you. For instance, I've been investigating a technique for the most important data structure in a system to be essentially immutable. That is, instead of updating an existing instance of the object, we always create a new instance. This requires a handful of things to be useful (and useful for querying).

* We probably want a self-relation so we can see which object supersedes another. A series of objects that supersede one another is called a lifecycle.
* We want a timestamp on each object, so we can view a snapshot at a given time: that is, which phase of the lifecycle was active at that point.
* We should have a column that is unique per lifecycle: this makes querying all objects of a lifecycle much simpler (although we could use a recursive query for that).
* There must be a facility to prevent multiple heads on a lifecycle: that is, at most one phase of a lifecycle may be non-superseded.
* The lifecycle phases needn't be in the same order, or really have any differentiating features (like status). In practice they may, but for the … -
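The requirements above can be sketched in plain Python. The post itself uses Django models; this toy version, with invented names, only illustrates the supersede/lifecycle idea (self-relation, timestamp, per-lifecycle id, single head):

```python
import itertools
from datetime import datetime, timezone

_ids = itertools.count(1)

class Phase:
    """One immutable phase in a lifecycle; superseding creates a new Phase."""

    def __init__(self, data, lifecycle_id=None, supersedes=None):
        self.id = next(_ids)
        self.data = data
        self.timestamp = datetime.now(timezone.utc)   # for point-in-time snapshots
        # New lifecycles get their own id; superseding phases inherit it,
        # giving the "unique per lifecycle" column without recursive queries.
        self.lifecycle_id = lifecycle_id if lifecycle_id is not None else self.id
        self.supersedes = supersedes                  # the self-relation
        self.superseded = False

    def supersede(self, data):
        if self.superseded:
            raise ValueError("lifecycle already has a newer head")
        self.superseded = True                        # enforce a single head
        return Phase(data, lifecycle_id=self.lifecycle_id, supersedes=self)
```

In a real database the single-head rule would need a constraint (or careful locking) rather than an in-memory flag, which is where the ORM wrestling in the post comes in.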
Building a better DatabaseCache for Django on MySQL
I recently released version 0.1.10 of my library django-mysql, for which the main new feature was a backend for Django’s cache framework called MySQLCache. This post covers some of the inspiration and improvements it has, as well as a basic benchmark against Django’s built-in DatabaseCache. TL;DR - it’s better than DatabaseCache, and if you’re using MySQL, please try it out by following the instructions linked at the end. Why bother? Django’s cache framework provides a generic API for key-value storage, and gets used for a variety of caching tasks in applications. It ships with multiple backends for popular technologies, including Redis and Memcached, as well as a basic cross-RDBMS DatabaseCache. The DatabaseCache is recommended only for smaller environments, and due to its supporting every RDBMS that Django does, it is not optimized for speed. Redis and Memcached are the most popular cache technologies to use, being specifically designed to do key-value storage; you could even say Django’s cache framework is specifically designed to fit them. If they work so well, why would anyone bother using DatabaseCache, and why would I care about improving on it? Well, I have a few reasons: Fewer moving parts If you can get away with … -
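The cache-framework API the post refers to boils down to get/set with a timeout. As a rough illustration of that contract, here is a toy in-memory version with invented names; MySQLCache naturally stores entries in a database table instead, so this is not its implementation:

```python
import time

class TinyCache:
    """Toy key-value cache with per-entry expiry, mirroring the shape of
    Django's cache.set(key, value, timeout) / cache.get(key) API."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock       # injectable for testing
        self._data = {}

    def set(self, key, value, timeout=300):
        self._data[key] = (value, self._clock() + timeout)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires = entry
        if self._clock() >= expires:
            del self._data[key]   # lazy expiry on read
            return default
        return value
```

A database-backed version replaces the dict with a table and has to think about culling expired rows, which is one of the areas where backends can differ in performance.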
Adding Maintenance Data pt 1
Join us as we continue building our product by starting to allow our users to add bike maintenance records to their bikes. Watch now... -
Markup Language Faceoff: Lists
Today I want to talk about lists. Not shopping lists, not the programming data type, but the display of items in both unordered and ordered fashion. Specifically this:

* Item A
* Item B
  1. First Numbered Inner Item
  2. Second Numbered Inner Item
* Item C

In other words, lists of bullets and numbers. This article explores some of the different tools used by the programming world to render display lists, specifically HTML, reStructuredText, Markdown, and LaTeX. HTML If you view the HTML source of this web page, you'll find this: <ul class="simple"> <li>Item A</li> <li>Item B<ol class="arabic"> <li>First Numbered Inner Item</li> <li>Second Numbered Inner Item</li> </ol> </li> <li>Item C</li> </ul> Or more clearly: <ul class="simple"> <li>Item A</li> <li>Item B <ol class="arabic"> <li>First Numbered Inner Item</li> <li>Second Numbered Inner Item</li> </ol> </li> <li>Item C</li> </ul> This works, but is incredibly verbose. HTML requires closing tags on every element (keep in mind browsers are not the same as specifications). Working with lists in HTML becomes tedious quickly. Which is why so many people use WYSIWYG tools or markup languages like reStructuredText and Markdown, as they expedite the creation of lists (and many other things). reStructuredText This blog is written in reStructuredText and transformed into HTML. Let's see the markup for this blog post: * Item … -
Q2 2015 ShipIt Day ReCap
Last Friday everyone at Caktus set aside their regular client projects for our quarterly ShipIt Day, a chance for Caktus employees to take some time for personal development and independent projects. People work individually or in groups to flex their creativity, tackle interesting problems, or expand their personal knowledge. This quarter’s ShipIt Day saw everything from game development to Bokeh data visualization, Lego robots to superhero animation. Read more about the various projects from our Q2 2015 ShipIt Day. -
Django Proxy Model Relations
I've got lots of code I'd do a different way if I were to start over, but often, we have to live with what we have. One situation I would seriously reconsider is the structure I use for storing data related to how I interact with external systems. I have an `Application` object, and I create instances of this for each external system I interact with. Each new `Application` gets a UUID, and is created as part of a migration. Code in the system uses this UUID to determine if something is for that system. But that's not the worst of it. I also have an `AppConfig` object, and other related objects that store a relation to an `Application`. This was fine initially, but as my code got more complex, I hit upon the idea of using Django's Proxy models, and using the related `Application` to determine the subclass. So, I have `AppConfig` subclasses for a range of systems. This is nice: we can even ensure that we only get the right instances (using a lookup to the application to get the discriminator, which I'd probably do a different way next time). However, we also have other bits of information …
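Django proxy models add behaviour to an existing table without creating a new one; the post's trick of picking an `AppConfig` subclass based on the related `Application` can be sketched in plain Python. The names and the registry here are invented simplifications, not the author's actual code (which relies on Django's `Meta.proxy` and the Application UUID as discriminator):

```python
class AppConfig:
    """Base config; subclasses register themselves by discriminator."""

    registry = {}
    discriminator = None

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        if cls.discriminator:
            AppConfig.registry[cls.discriminator] = cls

    @classmethod
    def for_application(cls, application_uuid, **data):
        # Pick the subclass matching the external system, the way the post
        # picks a proxy model from the related Application's UUID.
        subclass = cls.registry.get(application_uuid, cls)
        obj = subclass()
        obj.__dict__.update(data)
        return obj


class BillingConfig(AppConfig):
    discriminator = "uuid-billing"    # invented example discriminator

class CRMConfig(AppConfig):
    discriminator = "uuid-crm"
```

In Django terms, `for_application` corresponds to a manager or queryset hook that swaps in the right proxy class after loading the row, so callers always get the behaviour for their external system.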