Django community: RSS
This page, updated regularly, aggregates community blog posts from the Django community.
-
Djangocon: the future of postgresql in django - Marc Tamlyn
(One of the summaries of a talk at 2014 djangocon.eu) Marc Tamlyn is well-known for his successful kickstarter campaign to improve postgresql support in django. He asked for a show of hands: some 90% of the attendees use postgresql. It is the most popular database for django projects. Most of the core team favours it. It has a wide feature set and it is a proper, active (non-oracle-owned) open source project. Wide feature set... but not all of the features are supported by django. You can use several add-ons, but they're not that nicely integrated. So he proposed a kickstarter project for creating django.contrib.postgres. 14k (in UK pounds) was raised by the kickstarter campaign! Wow. He thanked everyone who contributed (naming a couple of big corporate contributors). The core of the project is to add support for a couple of data types, like array, json and hstore. An array can hold just about any type. Nested structures are OK. No foreign keys, though. This one is almost ready for django 1.8. HStore is a key-value store. The most requested feature! It is quite similar to django-hstore, which is already a very good add-on. They'll help him with getting HStore … -
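For context: the array and hstore fields discussed here eventually shipped in django.contrib.postgres in Django 1.8. A minimal model sketch of what that API looks like (assumes Django 1.8+, the postgresql backend, and the hstore extension enabled in the database; the model and field names are made up for illustration):

```python
# Sketch only -- needs Django >= 1.8, a postgresql database and
# 'django.contrib.postgres' in INSTALLED_APPS; names are illustrative.
from django.contrib.postgres.fields import ArrayField, HStoreField
from django.db import models

class Dog(models.Model):
    name = models.CharField(max_length=200)
    # An array can hold just about any base field type (no foreign keys):
    nicknames = ArrayField(models.CharField(max_length=100), default=list)
    # HStore: the key-value store, the most requested feature:
    data = HStoreField(default=dict)
```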
Djangocon: web components in django - Xavier Dutreilh
(One of the summaries of a talk at 2014 djangocon.eu) Xavier Dutreilh talks about web development. You probably use libraries like jquery and backbone. And css frameworks like bootstrap. And tools like bower, grunt, yeoman and jshint. One of the things you should care about is performance. Especially for mobile devices, you don't want to send over too much css/js. You don't want overly exotic and difficult technologies, as you as a web developer will probably lack the specialized knowledge. Accessibility is important. Cross-browser functionality, too. But... we often disregard the wishes we enumerated above and simply look for ready-to-use frameworks and pile them on top of each other. And we hope everything works as expected. Which it doesn't, and we spend lots of time debugging. We inherit technical debt by piling everything up like this. What can we do? Well, we first have to stop making such a mess and start looking at the requirements we figured out. The best tip is to think about individual elements. We could also try to do it lean and mean: custom code. Start by looking at a web page. What does it consist … -
Djangocon: pair up django and web mapping - Mathieu Leplatre
(One of the summaries of a talk at 2014 djangocon.eu) Mathieu Leplatre (who works at Makina Corpus) says web mapping should be simple and google maps should become the exception. Fundamentals of cartography: projections and postgis. Cartography is based on location: longitude (x) and latitude (y). Normally you use decimal degrees: -180 to +180 longitude, -90 to +90 latitude. Well known from GPS tools. Problem: the earth isn't perfectly round. GPS uses the "WGS 84" reference ellipsoid to standardize the latitude and longitude. Problem 2: you often have to show it on a flat map or flat display. So you have to project the 3D data onto a 2D surface. Naturally this means compromises. One example is the mercator projection that google uses. For more background, examples and especially illustrations, look at mapschool.io. Transformation often happens between "WGS 84" (srid=4326, the most-used one for storing data in databases) and google mercator for display (srid=3857). Those scary-looking "srid" numbers? Spatial reference ID. But don't worry, those two numbers are almost always the default values for databases and tools. Another fundamental concept is the data format. Basically you have either vector data (points, lines, polygons) or rasters (bitmap data). Vector … -
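As a back-of-the-envelope illustration of that 4326 → 3857 transformation: web mercator treats the earth as a sphere of radius 6378137 m and maps degrees to meters. A tiny stdlib-only sketch of the spherical mercator formulas (real projects should use postgis, proj or similar instead):

```python
import math

R = 6378137.0  # sphere radius (meters) used by google/web mercator

def wgs84_to_webmercator(lon, lat):
    """Project WGS 84 decimal degrees (srid 4326) to web mercator
    meters (srid 3857) using the spherical mercator formulas."""
    x = R * math.radians(lon)
    y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
    return x, y

# The antimeridian ends up at about 20037508 m from the origin:
print(wgs84_to_webmercator(180, 0))
```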
Djangocon: two short talks about django-oscar and about performance gains
(One of the summaries of a talk at 2014 djangocon.eu) An introduction to django-oscar - David Winterbottom. David Winterbottom is, as I just now noticed, the originator of the very useful www.commandlinefu.com website that I've had in my RSS reader for the past few years. Recommended: you get a useful tip out of it every month or so. More related to django: he's the author of the e-commerce framework 'django-oscar'. In e-commerce, especially B2B (business to business), you get loads and loads of exceptions. He originally worked on a PHP system that basically broke down under all those exceptions to the standard rules. He now works on django-oscar, which is used by a number of very large customers. Ecommerce: products, baskets, orders. Overridable apps. It can be added to your site without modifying anything: it won't take over your whole site. It doesn't use the django admin though, but a custom edit site. Philosophically, oscar isn't an out-of-the-box product. It is a generic product, so it cannot know how your tax system works, which payment provider you use, or how you handle shipping. So you'll have to add that yourself. Oscar provides a base platform you can build upon. … -
Djangocon: distributed systems - Raphael Barrois
(One of the summaries of a talk at 2014 djangocon.eu) Raphael Barrois has this definition for distributed systems: they are a set of autonomous, interconnected services, possibly with different codebases, each service with its own database. Why? When? For elaborate systems you can make a nice tidy elaborate architecture. But after two years of bugfixing and feature-adding and changes it inevitably becomes a huge sprawling mess. So... split it up! Into separate components. When would you start doing something distributed-like? When a new feature has to be added that's not an incremental improvement. A feature that is strongly disconnected from the core features. Another reason can be that you install a new version of your main project somewhere, possibly in another datacenter or with a separate database. It still has to be operated by the same ops team, though. Or there could be technical reasons like scaling/sharding or geographic expansion or debundling/upgrading. Building it: where should you start? A good point is to extract generic code: misc, util, tools. Everything that's not business-specific. Perhaps also some basic business components: common models, UI, etc. Put this kind of code into separate modules and adapt your deployment process for fast-moving internal dependencies … -
Djangocon: two talks, ecology and healthchecks
(One of the summaries of a talk at 2014 djangocon.eu. Two (short) talks actually.) Django powered ecological data - Jakub Witold Bubnicki. Jakub Witold Bubnicki works on data-intensive science. For instance in the biological sciences, more and more data is becoming available. Globally. In data-intensive research, you often have to integrate and adapt multiple sources. Smaller teams (let alone individual PhD students) often cannot create the infrastructure needed. He has been collecting photos from his "camera traps" for two years now. He has to integrate this with geographical data and, for instance, remote temperature datasets. Often, the data includes both a space and a time component (location and date). The collected data needs to be easily accessible, discoverable, shareable and reusable. This is often not the case. So he started designing a django architecture to fix this. He also uses lots of existing applications like metacat (catalogs), geoserver (geographical data) and rasdaman (rasters). Django provides the authentication, filtering, browsing, searching and permissions. It connects to the external applications. The django site itself also has an API so that others can connect to it. Note: they're still working on it, it is not finished. But the generic data browsing and a bunch of … -
Djangocon: visibility for web developers - Bruno Renie
(One of the summaries of a talk at 2014 djangocon.eu) Note beforehand: Bruno Renie had several useful things to say at previous djangocons. I'd like to draw specific attention to his lightning talk last year about settings. He later wrote a more elaborate version on his own blog. Something I'm gonna take a deeper look at in the coming weeks, as I'll have to fix something in many of our internal sites :-) He works in a diverse team and in a quite complex infrastructure, so they need to make their infrastructure visible: errors, events, metrics. Errors: easy: use sentry. Events: basically, a log call. The errors and sentry mentioned above are great for developers, but you often need info on the requests that don't fail, too. For this, you need centralized logging: searching through logfiles on several machines doesn't cut it. As an aggregator, they use logstash + elasticsearch. They use kibana as a frontend. rsyslog/syslog-ng can work with logstash-forwarder ("lumberjack") to forward everything to logstash. In python, you can use a sysloghandler to send everything to the syslog. You have to use structured logging. A formatted string is nice for the logfile, but not if you want to … -
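That last point about structured logging can be shown with a tiny stdlib-only sketch: instead of one formatted string, emit JSON so aggregators like logstash/elasticsearch can index individual fields (the field set here is made up; real setups typically add timestamps, hostnames, request ids, etc.):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as a JSON object, so log aggregators can
    index fields instead of parsing formatted strings."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.warning("user %s logged in", "alice")
# emits: {"level": "WARNING", "logger": "demo", "message": "user alice logged in"}
```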
Djangocon keynote: where the wild things are - Aymeric Augustin
(One of the summaries of a talk at 2014 djangocon.eu) Aymeric Augustin talks about app loading. App loading is a step in django's initialization sequence: import all models and populate a cache. It is also a project that improves this step. The trac ticket for this feature is seven years old... Many many people have worked on it and provided patches and branches. The final implementation looks like this: from django.apps import AppConfig class MyConfig(AppConfig): name = ... label = ... verbose_name = ... # and in the settings file: INSTALLED_APPS = ( 'some_app', # Old style. 'yourapp.apps.MyConfig', # Points to the config class. ... All nice, but in practice there were crashes when using the implementation in production with gunicorn. The root cause was that django doesn't have a proper signal that says that it is fully configured and ready. So he started digging into the code. And found out that actually reading all the models and setting everything up occurs at various points in time, depending on how you call it. "runserver" is different from "shell" which is different from "running it from wsgi". At the core is the AppCache that django uses behind the scenes. This looks … -
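The flattened code in that summary, untangled into a sketch (the app and label names here are hypothetical placeholders; the AppConfig API itself is as it shipped in Django 1.7):

```python
# yourapp/apps.py -- sketch, requires Django >= 1.7; names are placeholders.
from django.apps import AppConfig

class MyConfig(AppConfig):
    name = 'yourapp'                   # dotted python path of the app
    label = 'yourapp'                  # short, unique label
    verbose_name = 'Your application'  # human-readable name

# settings.py:
INSTALLED_APPS = (
    'some_app',               # old style
    'yourapp.apps.MyConfig',  # new style: points to the config class
)
```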
Djangocon: tuesday lightning talks
(One of the summaries of a talk at 2014 djangocon.eu) Migrate an existing web application to django - Samuel Goldszmidt. What to do when you have a PHP/mysql app and want to turn it into a django site? Set up a django site with the latest version (1.7), which has migrations included. Add your database settings. Create an app and use manage.py inspectdb > yourapp/models.py to inspect your database and generate models. Then create the default core django tables (just run manage.py migrate). Create the initial migration with manage.py makemigrations yourapp. Start renaming tables and fixing things up, and turn those steps into migrations. At the end you have a nice clean set of models, including automatic migrations. You could use the "squashmigrations" command to get a new clean baseline migration once you're done. Harness the speed of the wheel - Xavier Fernandez. Wheel is python's new (binary) package format. It is a zip format archive; you can basically extract it and it is ready. It has a specially formatted file name that includes information about the target architecture (as it can contain compiled code). Installing from wheels is 4 or 5 times faster for Django. For something that is heavy in … -
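The migration recipe above, collected as a command sequence (a sketch; assumes a Django 1.7 project whose settings already point at the legacy mysql database):

```shell
# Generate model definitions from the existing legacy tables:
python manage.py inspectdb > yourapp/models.py

# Create django's own core tables (auth, sessions, ...):
python manage.py migrate

# Turn the generated models into the migration baseline:
python manage.py makemigrations yourapp

# ...rename tables and fix fields, capturing each step as a migration,
# then collapse the accumulated history into one clean baseline:
python manage.py squashmigrations yourapp <migration_name>
```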
Internet & mobile connectivity
The conference and sprint venues will of course be furnished with a suitable wireless network. You'll also be able to get wireless access from your hotel apartments and rooms. You may nevertheless wish to buy a SIM card for mobile connectivity. You need to do this before you arrive on the island. We recommend Orange, Bouygues or SFR. -
Django sticky queryset filters
In Django, Stuff.objects.filter(a=1).filter(b=1) is almost always the same as Stuff.objects.filter(a=1, b=1). Everyone knows and expects this, and it's very well documented. However, Stuff.objects.filter(rel__a=1).filter(rel__b=1) might not be the same as Stuff.objects.filter(rel__a=1, rel__b=1). This is also very well documented, but in my opinion this behavior is not always intuitive. Let's take an example: class Tag(models.Model): name = models.CharField(max_length=100) class Entry(models.Model): tags = models.ManyToManyField(Tag) Now if we run Entry.objects.filter(tags__name='stuff') we'd get roughly something like: SELECT `app_entry`.`id` FROM `app_entry` INNER JOIN `app_entry_tags` ON (`app_entry`.`id` = `app_entry_tags`.`entry_id`) INNER JOIN `app_tag` ON (`app_entry_tags`.`tag_id` = `app_tag`.`id`) WHERE `app_tag`.`name` = 'stuff' If we run Entry.objects.filter(tags__name='stuff').filter(tags__name='other') we'd get roughly something like: SELECT `app_entry`.`id` FROM `app_entry` INNER JOIN `app_entry_tags` ON (`app_entry`.`id` = `app_entry_tags`.`entry_id`) INNER JOIN `app_tag` ON (`app_entry_tags`.`tag_id` = `app_tag`.`id`) INNER JOIN `app_entry_tags` T4 ON (`app_entry`.`id` = T4.`entry_id`) INNER JOIN `app_tag` T5 ON (T4.`tag_id` = T5.`id`) WHERE (`app_tag`.`name` = 'stuff' AND T5.`name` = 'other') Two JOINs are exactly what we wanted - a WHERE with a single JOIN wouldn't make sense anyway. A different example: suppose we want to model books that have multiple authors: class Author(models.Model): nationality = models.CharField(max_length=100) sex = models.CharField(max_length=1) birth = models.DateField(max_length=100) alive = models.BooleanField(default=True) class Book(models.Model): authors = models.ManyToManyField(Author) What if we want to get … -
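The effect is easy to reproduce without Django at all: the two SQL shapes above, run with stdlib sqlite3 against one entry tagged both 'stuff' and 'other' (table names shortened from the generated SQL), return different results:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE entry (id INTEGER PRIMARY KEY);
    CREATE TABLE tag (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE entry_tags (entry_id INTEGER, tag_id INTEGER);
    INSERT INTO entry VALUES (1);
    INSERT INTO tag VALUES (1, 'stuff'), (2, 'other');
    INSERT INTO entry_tags VALUES (1, 1), (1, 2);
""")

# A single-join version would demand one tag row to have both names,
# which is impossible:
single_join = conn.execute("""
    SELECT entry.id FROM entry
    JOIN entry_tags ON entry.id = entry_tags.entry_id
    JOIN tag ON entry_tags.tag_id = tag.id
    WHERE tag.name = 'stuff' AND tag.name = 'other'
""").fetchall()

# .filter(tags__name='stuff').filter(tags__name='other') joins twice,
# so each condition can be satisfied by a *different* tag row:
double_join = conn.execute("""
    SELECT entry.id FROM entry
    JOIN entry_tags ON entry.id = entry_tags.entry_id
    JOIN tag ON entry_tags.tag_id = tag.id
    JOIN entry_tags t4 ON entry.id = t4.entry_id
    JOIN tag t5 ON t4.tag_id = t5.id
    WHERE tag.name = 'stuff' AND t5.name = 'other'
""").fetchall()

print(single_join)  # -> []
print(double_join)  # -> [(1,)]
```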
Custom queries III: MySQL
Ok, we saw how to connect to an SQLite database in Python. Now let's see MySQL. There's only a small difference in syntax, but it's significant. The steps, however, are the same. Import the library: import MySQLdb as ms. Note: the as specifies an alternate name for the library. In this case, I won't have to type MySQLdb again and again. I'll just type ms. Establish a connection: conn = ms.connect(host, username, password, database_name). If you're running this on your own machine, host is normally localhost. You'll have to create a root user, which you'll have to look up how to do for your particular OS on the Internet. Provide that username and password. Then execute the command create database db_name and use this database as the fourth field. Exhausted? So am I. I know it's a lot of effort, but trust me, the results are totally worth it. Create a cursor as usual: cur = conn.cursor(). Remember I said %s doesn't work in SQLite? Well, that's the only thing that works here. Let's write a query: query = "select name from employee where empid = %s" % empid. READ THIS VERY CAREFULLY: now, empid is an integer, but you still have to pass it as a string. Assume empid is 1234. The … -
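One caution about the query above: interpolating empid with % builds the value into the SQL string itself, which is how SQL injection happens. Database drivers accept the parameters separately instead — in MySQLdb the placeholder is %s passed as a second argument to execute(). The same idea, shown runnable with stdlib sqlite3 (whose placeholder is ?; the table and data are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (empid INTEGER, name TEXT)")
conn.execute("INSERT INTO employee VALUES (1234, 'Ada')")

empid = 1234
# Pass parameters separately; the driver does the quoting safely:
cur = conn.execute("SELECT name FROM employee WHERE empid = ?", (empid,))
row = cur.fetchone()
print(row)  # -> ('Ada',)

# The MySQLdb equivalent (sketch, not run here) would be:
# cur.execute("select name from employee where empid = %s", (empid,))
```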
Django and IPython Notebook
The IPython Notebook is a really cool application, and I've always wanted to use it during Django development and debugging. The only problem is that it requires a lot of dependencies and I feel no need to encumber my production projects with those dependencies for a feature that I ... -
Supervisor with Django and Gunicorn
Supervisor with Django: A starter guide. This post assumes that you have used gunicorn and know what it does. I will try everything inside a virtual environment and hope you do the same. What is supervisor? Supervisor is a monitoring tool that can monitor your processes. It can restart a process if it dies or gets killed for some reason. Use of supervisor: why I started using it. In production, I use gunicorn as the web server. I started a gunicorn process as a daemon and logged out from the server. My site ran as expected for a few days. All of a sudden, we started getting '502 Bad Gateway' and I had no idea why. I had to ssh to the server to find out what went wrong. After ps aux | grep gunicorn, I found out gunicorn wasn't running anymore. My gunicorn process had died on its own, and I had no idea when or why. Had I used supervisor, supervisor would have been controlling the gunicorn process. It would have received a signal when gunicorn died and would have created a new gunicorn process in such a scenario. And my site would have kept running as expected. Other scenario … -
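A minimal supervisor program section for such a gunicorn process might look roughly like this (every path, name and port here is a placeholder for your own project):

```ini
; /etc/supervisor/conf.d/mysite.conf -- sketch; all names/paths are placeholders
[program:mysite]
command=/home/deploy/venv/bin/gunicorn mysite.wsgi:application --bind 127.0.0.1:8000
directory=/home/deploy/mysite
user=deploy
autostart=true
autorestart=true            ; restart gunicorn if it dies -- the whole point here
stdout_logfile=/var/log/mysite/gunicorn.log
redirect_stderr=true
```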
PyGrunn: Processes, data and (the) rest - Henk Doornbos
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Henk Doornbos gave a talk in 2011 at PyGrunn about large codebases. Nice talk, that one, so I was looking forward to his talk this year. Programming is nice, especially with Python, but in the end the most important thing is to make sure you've handled all the processes and all the data in whatever you've got to build. Designing information systems. How do you do it? Talking to people, reading, brainstorming. It is hard to determine which data you have to work with. A big problem is that data is often stored in databases. Often legacy databases. And databases are much harder to refactor than code! So figuring out the data is very important. Getting the data structure out of written use cases is quite some work. You can look for nouns, for names: those often give important clues. But you'll still miss things. Trying to describe the process also gives clues. Often the process description language/diagram and the data language/diagram don't match. What he's looking for is a reliable, repeatable way to: Precisely describe a business process Find the complete and sufficient set … -
PyGrunn: Geoprocessing with Python - Greg Kowal
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Greg Kowal talks about geoprocessing with GDAL/OGR. Geoprocessing is a lot about map projections. The earth is round, but not quite. And "round" needs to be mapped onto a flat square screen, so you need to reproject. There are lots of ways to do that. What do people do with geoprocessing? Analyzing sensor data, calculating optimal traffic routes, flood risks, etc. Often you have to take multiple data sources and merge them somehow to end up with a proper map. What's there in python? Shapely is nice and pythonic, but it cannot handle many data sources. ArcPy is proprietary. GDAL/OGR is a real swiss army knife: useful as hell, but ugly. QGIS is a graphical swiss army knife, very handy for visually toying with your data. GDAL is for rasters (bitmaps), OGR for vectors (points, lines, polygons). They're now distributed together. (Note: if you manage to install it on OSX you're a king: it is hard...). Why are they so interesting? OGR supports 78 formats, from postgresql to the Czech cadastral exchange data format. GDAL in turn supports 133 formats, from PNG and JPEG to a … -
PyGrunn: Modern authentication in python web apps - Arthur Barseghyan
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Arthur Barseghyan talks about SSO (single sign on) and two-factor authentication. Single sign on: if you have multiple web frameworks and websites, normally every one of them would need a user database and its own authentication system. Without SSO, you could perhaps (bad idea) pick one of them, make that the leading one and hack the rest to support it. Or you'd expect users to log in multiple times (also a bad idea). Or you could use a custom API to let the sites communicate their authentication data (also a bad idea). With single sign on you don't have many of these problems. As an example, he uses (JaSig) CAS, a java enterprise single sign-on solution. There are a whole lot of plugins. It is open source, scalable and well documented. It supports lots of backends. For logging in you need three parties: a web browser, the CAS server and your application server. Your application server functions as a CAS client. Pro: centralised authentication for all frameworks and applications. No problem when one app is in Django and the other one in Flask … -
PyGrunn: Documentation is king - Kenneth Reitz
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Note: Kenneth Reitz gave another talk earlier during the day called "growing open source seeds". I was at a different talk, but the title is the same as a talk he gave at last year's EU djangocon. And I do have a summary of that one. A good talk! Note 2: can't get enough? He also compared Django and Flask at the 2013 EU djangocon. Recommended, mainly because of his suggestion to use open source all the things as your software architecture. "It is a helpful mindset to at least treat everything you make like it will be open sourced, even if you won’t actually do it." Anyway, on to his actual talk about documentation! He found a trend in all the stuff he does: the best things are the simpler ones. For instance: prime lenses, handheld games, pen and paper, a mechanical watch, a single carry-on bag. Constraints foster creativity. So something that constrains you helps you get creative. Kenneth needs simple things to function well. The most well-known Python thing he's built is the requests library. Nice and simple and waaaaay more useful than the default python … -
PyGrunn: SSL, CAs and keeping your stuff safe - Armin Ronacher
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Armin often talks at PyGrunn. I've got three older summaries: the future of wsgi and python 3 (2010), the state of python and the web (2011), and I am doing http wrong (2012). The one I liked most was his 2011 talk at the EU djangocon about the impact of Django. He's proposing a new title for his talk: a capitalistic and system conformant talk about encryption. Well, no, it is just about SSL. He's working at "splash damage/fireteam": infrastructure for games. They want to keep someone from gaming the games, so they encrypt everything. (Note: I like one of their ipad games very much, rad soldiers). It is too easy to forget the bigger picture. He uses an analogy between bitcoin and credit cards. Bitcoins are completely encrypted and whatever. The credit card number, in contrast, is very insecure and unencrypted. But the whole credit card process is very secure. If someone steals your bitcoin key, you lose all the money, but you won't have the same problem with your credit card if it were stolen. So: think about the bigger picture! Regarding encryption: … -
PyGrunn: Writing idiomatic python - Jeff Knupp
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Jeff Knupp wrote an ebook about writing pythonic code. The subtitle of his talk is towards comprehensible and maintainable code. "Idiomatic python" doesn't mean "idiotic snake". It means "pythonic code": code written in the way the Python community has agreed it should be written. Who decided this? Well, all the python developers, through the code they write, share and criticize. The patterns you see there. Sometimes the real decision is made by the BDFL (Guido) or a PEP. Why would you? Three reasons: Readability. This helps people read your code. You keep the "cognitive burden" low. If I have to think while reading your code, your code is harder to read. I don't want to remember things if it isn't necessary. "Cognitive burden" is the best measure of readability. Obligatory Knuth quote, paraphrased: write code to explain to a human what we want the computer to do, don't write just for the computer. Maintainability. Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. Correctness. If you're the only one that can read your code, correctness … -
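A tiny before/after in the spirit of the talk — identical behavior, but the idiomatic version carries less cognitive burden (the function names are made up for the example):

```python
# Non-idiomatic: manual index bookkeeping the reader must simulate in
# their head.
def label_items_unpythonic(items):
    result = []
    i = 0
    while i < len(items):
        result.append(str(i) + ": " + items[i])
        i = i + 1
    return result

# Idiomatic: enumerate plus a list comprehension say the same thing
# directly.
def label_items(items):
    return [f"{i}: {item}" for i, item in enumerate(items)]

print(label_items(["read", "write"]))  # -> ['0: read', '1: write']
```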
PyGrunn: gevent - Denis Bilenko
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). gevent gives you something that works pretty much like threads without many of the drawbacks. Denis Bilenko is its author. It works by using user-level event loops. Gevent mostly works at the level of IO, for instance for web connections, sockets, processes, etc. Gevent tries to be very stdlib compatible. Gevent modules are mostly drop-in replacements for python standard lib ones: just change the import and you use the gevent version. This helps a lot with the learning curve ("just the generic stdlib way of working") and it also helps by enforcing backward compatibility ("it has to work like the stdlib version"). Note: gunicorn, used a lot for Django, uses gevent behind the scenes. Gevent was written to avoid the complexity of event loops. Event loops mean you give up most of the control of your code to whatever is happening inside those loops. Regular python exception handling becomes almost impossible. And you have to give up context managers. And the common synchronous programming style. Giving all this up is not needed: use gevent. Gevent uses "greenlets" behind the scenes: so-called "stackful co-routines". … -
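The "just change the import" idea, sketched (assumes the third-party gevent package is installed; the URL is a placeholder and this is not run here):

```python
# Sketch only -- requires the third-party gevent package.
from gevent import monkey
monkey.patch_all()  # swap stdlib socket/ssl/time/... for cooperative versions

import gevent
from urllib.request import urlopen  # now cooperative after patching

def fetch(url):
    # Reads synchronously -- but blocking IO yields to other greenlets.
    return urlopen(url).read()

# Ordinary-looking synchronous code, run concurrently in greenlets:
jobs = [gevent.spawn(fetch, "http://example.com/") for _ in range(3)]
gevent.joinall(jobs, timeout=10)
```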
PyGrunn: Sphinx plus Robot framework - Pawel Lewicki
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Pawel Lewicki talks about documentation as the result of functional testing. Sphinx is Python's standard way of creating documentation from rst files. The second technology he uses is the Robot framework, including the selenium2 plugin for headless browser testing. Robot framework test files are readable text files. Customers can read them pretty well; the tests contain regular English like "open browser to login page", "input text id_login demo-user", "click button css=.primary-action". Certain words in those text files are "keywords": words with special meaning to a test plugin. "Click button" is one of the selenium2 ones, for instance. There's also a selenium2screenshots plugin with keywords like "add pointy note". And especially "capture": capture a screenshot. This gets placed somewhere in the sphinx doc directory and can be included in the documentation. Screenshots that are always up to date!! He showed some examples. Robot framework code can be included in sphinx, with the proper plugins installed, with a .. code:: robotframework statement, followed by the robot code. A regular .. image:: instruction then includes the screenshot. Adjusting the screenshots regarding viewport height and width is possible. For … -
PyGrunn: Advanced continuous integration - Dirk Zittersteyn
(One of the summaries of the one-day 2014 PyGrunn conference in Groningen in the Netherlands). Dirk Zittersteyn introduces us to continuous integration. Always use version control, even when you're programming on your own. Git, mercurial, subversion. Use branches. Keep the master/trunk branch working always. The master needs to be deliverable at all times. Changes and fixes and new features are made on branches and merged when ready. "Always be integrating", a modified quote from a movie. The mainline should always be "green". All the tests should pass at all times. A problem is that agreements aren't always reality. Not everything is tested. You still can get broken code in. Just search github for the string "removed debug statement"... And two branches might each be green, but combined they might still bring your mainline down. So... you must check the merge. And, really, you shouldn't merge manually. The mainline shouldn't be merged by mortals, a script should do it. The merge you're trying out should be, again, on a separate branch and only be merged into the mainline when green. Doing this manually isn't good, so they let Jenkins do it. The documentation is on https://github.com/paylogic/paylogic-jenkins-plugins . One of the things they use …