Django community: RSS
This page, updated regularly, aggregates blog posts from the Django community.
-
Integration of the GitHub API with Python and Django
Using GitHub integration with Django, we can quickly get the user's verified email id, general information, GitHub URL, id, disk usage, public and private repos, gists, followers, and following. The following steps are needed for GitHub integration: 1. Create a GitHub app. 2. Authenticate the user and get an access token. 3. Get user information and work history using the access token. 1. Creating the GitHub App a. To create an app, click on "create an application" at the top of the page. Give the application a name and the application will be created. b. Now you can get the client id and secret of the application, and you can set the redirect URLs of your application. 2. Authenticating the user and getting an access token. a. Here we have to create a GET request asking for the user's permission. GET "https://github.com/login/oauth/authorize?client_id=GIT_APP_ID&redirect_uri=REDIRECT_URL&scope=user,user:email&state=dia123456789ramya" GIT_APP_ID: your application's client id. SCOPE: list of permissions to request from the person using your app. REDIRECT_URI: The url which you want … -
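The authorize URL from step 2 can be assembled with a few lines of Python. This is a minimal sketch; the client id, redirect URL, and state values are hypothetical placeholders for your app's own values:

```python
from urllib.parse import urlencode

GITHUB_AUTHORIZE_URL = "https://github.com/login/oauth/authorize"

def build_authorize_url(client_id, redirect_uri, state,
                        scope="user,user:email"):
    """Build the GitHub OAuth URL the user is redirected to.

    All arguments are placeholders for your application's values.
    """
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
        "state": state,  # unguessable random string, echoed back to you
    }
    return GITHUB_AUTHORIZE_URL + "?" + urlencode(params)

url = build_authorize_url("GIT_APP_ID", "https://example.com/callback",
                          "dia123456789ramya")
```

GitHub echoes `state` back to your redirect URL, so comparing it against the value you generated lets you reject forged callbacks.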
Implement search with Django-haystack and Elasticsearch Part-I
Haystack works as a search plugin for Django. You can use different backends (Elasticsearch, Whoosh, Solr, Xapian) to search objects, and all backends work with the same code. In this post I am using Elasticsearch as the backend. Installation: pip install django-haystack Configuration: add haystack to INSTALLED_APPS INSTALLED_APPS=[ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', #add haystack here 'haystack', 'books' ] Settings: Add backend settings for Haystack. HAYSTACK_CONNECTIONS = { 'default': { 'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine', 'URL': 'http://127.0.0.1:9200/', 'INDEX_NAME': 'haystack_books', }, } The settings above are for Elasticsearch. Add the signal processor for Haystack; this signal will update objects in the index. HAYSTACK_SIGNAL_PROCESSOR = … -
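To make the indexing idea concrete, here is a toy pure-Python sketch of the inverted index that a backend like Elasticsearch maintains on Haystack's behalf. This is not the Haystack API, only an illustration of what "indexing objects for search" means:

```python
from collections import defaultdict

class InvertedIndex:
    """Toy inverted index: maps lower-cased terms to document ids,
    roughly what a search backend maintains for each indexed field."""
    def __init__(self):
        self.index = defaultdict(set)
        self.docs = {}

    def add(self, doc_id, text):
        # Store the document and register each term it contains.
        self.docs[doc_id] = text
        for term in text.lower().split():
            self.index[term].add(doc_id)

    def search(self, term):
        # Look the term up and return matching documents in id order.
        return [self.docs[i] for i in sorted(self.index.get(term.lower(), ()))]

idx = InvertedIndex()
idx.add(1, "Django for beginners")
idx.add(2, "Advanced Django search")
```

A real backend adds analyzers, stemming, and ranking on top, but the core lookup structure is the same.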
Setting Up Coveralls for a Django Project
Why coveralls? Coveralls will check the code coverage of your Django project's test cases. To use coveralls.io your code must be hosted on GitHub or Bitbucket. Install coveralls: pip install coveralls Using Travis If you are using Travis for your CI, add the script below to the .travis.yml file in the project root folder language: python # python versions python: - "3.4" - "2.7.4" env: - DJANGO=1.8 DB=sqlite3 # install requirements install: - pip install -r requirements.txt - pip install coveralls # To run tests script: - coverage run --source=my_app1,my_app2 manage.py test # send coverage report to coveralls after_success: coveralls Sign up with GitHub at https://coveralls.io/ and activate coveralls for your repo. That's it. Happy Testing... -
Extract text with OCR for all image types in Python using pytesseract
What is OCR? Optical Character Recognition (OCR) is the process of electronically extracting text from images or documents such as PDFs and reusing it in a variety of ways, for example full-text searches. In this blog, we will see how to use 'Python-tesseract', an OCR tool for Python. pytesseract: It will recognize and read the text present in images. It can read all image types (png, jpeg, gif, tiff, bmp, etc.) and is widely used to process scanned documents. Installation: $ sudo pip install pytesseract Requirements: * Requires Python 2.5 or a later version. * Requires the Python Imaging Library (PIL). Usage: From the shell: $ ./pytesseract.py test.png The above command prints the text recognized in the image 'test.png'. $ ./pytesseract.py -l eng test-english.jpg The above command recognizes English text. In a Python script: from PIL import Image from pytesseract import image_to_string print(image_to_string(Image.open('test.png'))) print(image_to_string(Image.open('test-english.jpg'), lang='eng')) To know more about our Django CRM (Customer Relationship Management) Open Source Package, check the code. -
How to Create your own e-commerce shop using Django-Oscar.
Oscar is an open-source ecommerce framework for Django. Django Oscar provides a base platform to build an online shop. Oscar is built as a highly customisable and extendable framework. It supports pluggable tax calculations, per-customer pricing, multi-currency, etc. 1. Install Oscar $ pip install django-oscar 2. Then, create a Django project $ django-admin.py startproject <project-name> After creating the project, add all the settings (INSTALLED_APPS, MIDDLEWARE_CLASSES, DATABASES) in your settings file. You can find the reference on how to customize the Django Oscar app, urls, models and views here. Customising/Overriding templates: To override Oscar templates, first you need to update the template configuration settings in your settings file as below. import os location = lambda x: os.path.join( os.path.dirname(os.path.realpath(__file__)), x) TEMPLATE_LOADERS = ( 'django.template.loaders.filesystem.Loader', 'django.template.loaders.app_directories.Loader', 'django.template.loaders.eggs.Loader', ) from oscar import OSCAR_MAIN_TEMPLATE_DIR TEMPLATE_DIRS = ( location('templates'), OSCAR_MAIN_TEMPLATE_DIR, ) Note: In the 'TEMPLATE_DIRS' setting, you have to include your project's template directory path first, followed by Oscar's template folder, which you can import from oscar. When customising templates, you can either replace all of the content with your own or change only specific blocks using "extends". Ex: Overriding the home page {% extends 'oscar/promotions/home.html' %} {% block content %} Content goes here … -
Mark Lavin to Give Keynote at Python Nordeste
Mark Lavin will be giving the keynote address at Python Nordeste this year. Python Nordeste is the largest gathering of the Northeast Python community, which takes place annually in cities of northeastern Brazil. This year’s conference will be held in Teresina, the capital of the Brazilian state of Piauí. -
NGINX for static files for a dev Python server
When you work on the backend part of a Django or Flask project and there are many static files, the development server sometimes becomes slow. In this case it's possible to use nginx as a reverse proxy to serve the static files. I'm using nginx in Docker and the configuration is quite simple. Put a Dockerfile and default.conf.tmpl in some directory. Dockerfile FROM nginx:1.9 VOLUME /static COPY default.conf.tmpl /etc/nginx/conf.d/default.conf.tmpl EXPOSE 9000 CMD envsubst '$APP_IP $APP_PORT' < /etc/nginx/conf.d/default.conf.tmpl > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;' default.conf.tmpl server { listen 9000; charset utf-8; location /site_media { alias /static; } location / { proxy_pass http://${APP_IP}:${APP_PORT}; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Build the image with the docker build -t dev-nginx . command. To run it: docker run --rm -it -v `pwd`/static:/static -p 9000:9000 -e APP_IP=<your ip from ifconfig> -e APP_PORT=8000 dev-nginx Then you can access your development server through http://<localhost|docker-machine-ip>:9000. -
Using Django's built-in signals and writing custom signals
Django has a useful feature, signals, which lets you run code whenever certain actions are performed on a particular model. In this blog post, we'll learn how to use Django's built-in signals and how to create a custom signal. Using Django's built-in signals: Django has a lot of built-in signals like pre_save, post_save, pre_delete, and post_delete. For more information about Django's built-in signals visit https://docs.djangoproject.com/en/1.9/ref/signals/. Now we'll learn how to use Django's pre_delete signal with a simple example; the other signals can be used in the same way. We have two models, Author and Book, defined in models.py as below. # In models.py from django.db import models class Author(models.Model): full_name = models.CharField(max_length=100) short_name = models.CharField(max_length=50) class Book(models.Model): title = models.CharField(max_length=100) slug = models.SlugField(max_length=100) content = models.TextField() status = models.CharField(max_length=10, default="Drafted") author_id = models.PositiveIntegerField(null=True) In the two models above, the author is not a ForeignKey on the Book model, so by default deleting an Author won't delete the Books written by that author. This is the … -
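The connect/send mechanism behind signals like pre_delete can be sketched with a small stand-in Signal class (this is not Django's real dispatcher, and the receiver and instance names are hypothetical, but the shape matches how receivers are wired up):

```python
class Signal:
    """Minimal stand-in for django.dispatch.Signal, to illustrate
    the connect/send mechanism that pre_delete and friends use."""
    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        self.receivers.append(receiver)

    def send(self, sender, **kwargs):
        # Call every connected receiver with the sender and extra kwargs.
        return [(r, r(sender=sender, **kwargs)) for r in self.receivers]

pre_delete = Signal()
deleted_log = []

def delete_books(sender, instance, **kwargs):
    # A real pre_delete receiver would delete the Book rows whose
    # author_id matches the Author instance being removed.
    deleted_log.append(instance)

pre_delete.connect(delete_books)
pre_delete.send(sender="Author", instance="author-1")
```

With real Django you would connect the receiver via `pre_delete.connect(delete_books, sender=Author)` or the `@receiver` decorator, and Django would fire it automatically during deletion.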
Pygrunn: Micropython, internet of pythonic things - Lars de Ridder
(One of my summaries of the one-day 2016 PyGrunn conference). Micropython is a project that wants to bring Python to the world of microprocessors. Micropython is a lean and fast implementation of Python 3 for microprocessors. It was funded in 2013 on Kickstarter. Originally it only ran on a special "pyboard", but it has now been ported to various other microprocessors. Why use micropython? Easy to learn, with powerful features. Native bitwise operations. Ideal for rapid prototyping. (You cannot use cpython, mainly due to RAM usage.) It is not a full Python, of course; they had to strip things out. "functools" and "this" are out, for instance. Extra libraries for the specific boards are included. There are lots of memory optimizations. Nothing fancy, most of the tricks are directly from compiler textbooks, but it is nice to see it all implemented in a real project. Some of the supported boards: Pyboard. The "BBC micro:bit", which is supplied to 1 million school children! WiPy, more of a professional-grade board. LoPy, a board which supports LoRa, an open network to connect internet-of-things chips. Development: there is one full-time developer (funded by the ESA) and two core contributors. It is stable and … -
Pygrunn: Kliko, compute container specification - Gijs Molenaar
(One of my summaries of the one-day 2016 PyGrunn conference). Gijs Molenaar works on processing big data for large radio telescopes ("MeerKAT" in the south of Africa and "LOFAR" in the Netherlands). The data volumes coming from such telescopes are huge: 4 terabits per second, for example. So they do a lot of processing and filtering to get that number down. Gijs works on the "imaging and calibration" part of the process. So: scientific software. Which is hard to install and fragile, especially for scientists. So they use Ubuntu's Launchpad PPAs to package it all up as Debian packages. The new hit nowadays is Docker. Containerization: a self-contained, light-weight "virtual machine". Someone called it centralized agony: only one person needs to go through the pain of creating the container and all the rest of the world can use it... :-) His line of work is often centered around pipelines. Data flows from one step to the other and on to the next. This is often done with bash scripts. Docker is nice and you can hook up multiple dockers. But... it is all network-centric: a web container plus a database container plus a redis container. It isn't centered on data … -
Pygrunn: django channels - Bram Noordzij/Bob Voorneveld
(One of my summaries of the one-day 2016 PyGrunn conference). Django channels is a project to make Django handle more than "only" plain http requests. So: websockets, http2, etc. Regular http is the normal request/response cycle. A websocket is a connection that stays open, for bi-directional communication. Websockets are technically an ordered first-in first-out queue with message expiry and at-most-once delivery to only one listener at a time. "Django channels" is an easy-to-understand extension of the Django view mechanism. Easy to integrate and deploy. Installing django channels is quick. Just add the application to your INSTALLED_APPS list. That's it. The complexity happens when deploying it, as it is not a regular WSGI deployment. It uses a new standard called ASGI (the A stands for asynchronous). Currently there's a "worker service" called daphne (built in parallel to django channels) that implements ASGI. You need to configure a "backing service". Simplified: a queue. They showed a demo where everybody in the room could move markers over a map. Worked like a charm. How it works behind the scenes is that you define "channels". Channels can receive messages and can send messages to other channels. So you can have a channel for reading incoming messages, do … -
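The queue semantics described above (ordered, first-in first-out, message expiry, at-most-once delivery) can be modeled in a few lines of plain Python. This is only an illustration of the concept, not the channels API:

```python
import time
from collections import deque

class Channel:
    """Toy model of channel semantics: an ordered FIFO queue with
    message expiry and at-most-once delivery to a single reader."""
    def __init__(self, expiry=60):
        self.expiry = expiry
        self.queue = deque()

    def send(self, message, now=None):
        now = time.time() if now is None else now
        # Each message carries the deadline after which it is discarded.
        self.queue.append((now + self.expiry, message))

    def receive(self, now=None):
        now = time.time() if now is None else now
        while self.queue:
            deadline, message = self.queue.popleft()
            if deadline >= now:  # skip expired messages
                return message   # popped: delivered at most once
        return None

ch = Channel(expiry=60)
ch.send("first", now=0)
ch.send("second", now=0)
```

Because `receive` pops the message off the queue, a second reader can never see it again, which is the "at-most-once, one listener at a time" property.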
Pygrunn: Understanding PyPy and using it in production - Peter Odding/Bart Kroon
(One of my summaries of the one-day 2016 PyGrunn conference). Pypy is "the faster version of python". There are actually quite a lot of Python implementations; cpython is the main one. There are also JIT compilers. Pypy is one of them, and by far the most mature. PyPy is a Python implementation, compliant with 2.7.10 and 3.2.5. And it is fast! Some advantages of pypy: Speed. There are a lot of automatic optimizations. It didn't use to be fast, but for the last five years it has actually been faster than cpython! It has a "tracing JIT compiler". Memory usage is often lower. Multi-core programming. Some stackless features. Some experimental work has been started ("software transactional memory") to get rid of the GIL, the infamous Global Interpreter Lock. What does having a "tracing JIT compiler" mean? JIT means "Just In Time". It runs as an interpreter, but it automatically identifies the "hot path" and optimizes that a lot by compiling it on the fly. It is written in RPython, which is a statically typed subset of Python which translates to C and is compiled to produce an interpreter. It provides a framework for writing interpreters. "PyPy" really means "Python written in … -
Pygrunn: simple cloud with TripleO quickstart - K Rain Leander
(One of my summaries of the one-day 2016 PyGrunn conference). What is openstack? A "cloud operating system". Openstack is an umbrella with a huge number of actual open source projects under it. The goal is a public and/or private cloud. Just like you use "the internet" without concerning yourself with the actual hardware everything runs on, just in the same way you should be able to use a private/public cloud on any regular hardware. What is RDO? Exactly the same as openstack, but using RPM packages. Really, it is exactly the same. So a way to get openstack running on a Red Hat enterprise basis. There are lots of ways to get started. For RDO there are three oft-used ones: TryStack for trying out a free instance. Not intended for production. PackStack. Install openstack-packstack with "yum". Then you run it on your own hardware. TripleO (https://wiki.openstack.org/wiki/TripleO). It is basically "openstack on openstack". You install an "undercloud" that you use to deploy/update/monitor/manage several "overclouds". An overcloud is then the production openstack cloud. TripleO has a separate user interface that's different from openstack's own one. This is mostly done to prevent confusion. It is kind of heavy, though. The latest openstack release … -
Pygrunn: from code to config and back again - Jasper Spaans
(One of my summaries of the one-day 2016 PyGrunn conference). Jasper works at Fox-IT; one of the programs he works on is DetACT, a fraud detection tool for online banking. The technical summary would be something like "spamassassin and wireshark for internet traffic". Wireshark-like: DetACT intercepts online bank traffic and feeds it to a rule engine that ought to detect fraud. The rule engine is the part that needs to be configured. Spamassassin-like: rules with weights. If a transaction gets too many "points", it is marked as suspect. Just like spam detection in emails. In the beginning of the tool, the rules were in the code itself. But as more and more rules and exceptions got added, maintaining it became a lot of work. And deploying takes a while as you need code review, automatic acceptance systems, customer approval, etc. From code to config: they rewrote the rule engine from scratch to work based on a configuration. (Even though Joel Spolsky says totally rewriting your code is the single worst mistake you can make). They went 2x over budget. That's what you get when rewriting completely.... The initial test with hand-written json config files went OK, so they went … -
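The "rules with weights" idea can be sketched as data-driven configuration: each rule is a predicate plus a point value, and a transaction is suspect once its total crosses a threshold. The rule names, weights, threshold, and transaction fields below are all invented for illustration:

```python
# Hypothetical rule config: each rule pairs a predicate with a weight,
# mirroring the spamassassin-like "points" scheme described in the talk.
RULES = [
    {"name": "foreign_ip", "weight": 40,
     "test": lambda tx: not tx["ip"].startswith("10.")},
    {"name": "large_amount", "weight": 30,
     "test": lambda tx: tx["amount"] > 10_000},
    {"name": "new_payee", "weight": 20,
     "test": lambda tx: tx["payee_age_days"] < 1},
]
THRESHOLD = 60

def score(tx):
    """Sum the weights of every rule the transaction triggers."""
    return sum(r["weight"] for r in RULES if r["test"](tx))

def is_suspect(tx):
    return score(tx) >= THRESHOLD
```

Moving rules out of code and into a structure like `RULES` is exactly what makes them loadable from a JSON config file without a code deployment.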
Pygrunn keynote: Morepath under the hood - Martijn Faassen
(One of my summaries of the one-day 2016 PyGrunn conference). Martijn Faassen is well-known from lxml, zope, grok. Europython, Zope foundation. And he's written Morepath, a Python web framework. Three subjects in this talk: Morepath implementation details. History of concepts in web frameworks. Creativity in software development. Morepath implementation details. A framework with super powers ("it was the last to escape from the exploding planet Zope"). Traversal. In the 1990's you'd have filesystem traversal. example.com/addresses/faassen would map to a file /webroot/addresses/faassen. In zope2 (1998) you had traversal through an object tree, so root['addresses']['faassen'] in Python. The advantage is that it is all Python. The drawback is that every object needs to know how to render itself for the web. It is an example of creativity: how do we map filesystem traversal to objects? In zope3 (2001) the goal was the zope2 object traversal, but with objects that don't need to know how to handle the web. A way of working called "component architecture" was invented to add traversal capabilities to existing objects. It works, but as a developer you need to do quite some configuration and registration. Creativity: "separation of concerns" and "lookups in a registry". Pyramid sits somewhere in between. And … -
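The zope2-style object traversal described above can be sketched in a few lines: walk a URL path segment by segment with item lookups on a nested object tree. The tree contents here are invented for the example:

```python
class Node(dict):
    """A dict-based 'object tree', as in the root['addresses']['faassen']
    example from the talk."""
    pass

def traverse(root, path):
    """Map a URL path like 'addresses/faassen' to an object by
    successive item lookups, raising KeyError if a segment is missing."""
    obj = root
    for segment in path.strip("/").split("/"):
        obj = obj[segment]
    return obj

root = Node(addresses=Node(faassen="an address page object"))
```

The drawback mentioned in the talk follows directly: whatever `traverse` returns is handed to the web layer, so in zope2 every object in the tree had to know how to render itself.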
Pygrunn keynote: the future of programming - Steven Pemberton
(One of my summaries of the one-day 2016 PyGrunn conference). Steven Pemberton (https://en.wikipedia.org/wiki/Steven_Pemberton) is one of the developers of ABC, a predecessor of python. He's a researcher at CWI in Amsterdam. It was the first non-military internet site in Europe in 1988 when the whole of Europe was still connected to the USA with a 64kb link. When designing ABC they were considered completely crazy because it was an interpreted language. Computers were slow at that time. But they knew about Moore's law. Computers would become much faster. At that time computers were very, very expensive. Programmers were basically free. Now it is the other way. Computers are basically free and programmers are very expensive. So, at that time, in the 1950s, programming languages were designed around the needs of the computer, not the programmer. Moore's law is still going strong. Despite many articles claiming its imminent demise. He heard the first one in 1977. Steven showed a graph of his own computers. It fits. On modern laptops, the CPU is hardly doing anything most of the time. So why use programming languages optimized for giving the CPU a rest? There's another cost. The more lines a program has, the … -
Caktus CTO Colin Copeland Invited to the White House Open Police Data Initiative
We at Caktus were incredibly proud when the White House Police Data Initiative invited CTO Colin Copeland to celebrate their first year accomplishments. While at the White House, Colin also joined private breakout sessions to share ideas with law enforcement officials, city staff, and other civic technologists from across the country. Colin is the co-founder of Code for Durham and served as lead developer for OpenDataPolicingNC.com. OpenDataPolicingNC.com, a site built for the Southern Coalition for Social Justice, displays North Carolina police stop data. -
Why use factories in Django
From the very beginning of a project, you need some data. You need data in your development database and you need data for your automated tests. The instinctive solution is to manually enter a set of data via the Django admin. The official way is to enter data via Django fixtures file(s). Using factories will make it easier and better; here is why. -
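The core idea of a factory is a callable that fills in sensible defaults and lets each test override only the fields it cares about. Here is a minimal hand-rolled sketch using plain dicts; libraries such as factory_boy apply the same pattern to Django models, and the field names below are hypothetical:

```python
import itertools

_seq = itertools.count(1)

def author_factory(**overrides):
    """Build an 'Author' dict with sensible, unique defaults.

    Pass keyword overrides for only the fields a given test cares
    about; everything else stays out of the test's way.
    """
    n = next(_seq)
    data = {
        "full_name": f"Author {n}",
        "email": f"author{n}@example.com",
        "is_active": True,
    }
    data.update(overrides)
    return data
```

The sequence counter keeps unique-constrained fields (like the email here) distinct across calls, which is one of the main advantages over static fixture files.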
What We’re Clicking - April Link Roundup
It’s time for this month’s roundup of articles and posts shared by Cakti that drew the most attention on Twitter. The list highlights new work in civic tech and international development as well as reasons for the increasing popularity of Python and open source development. -
How to track Google Analytics pageviews on non-web requests (with Python)
tl;dr: Use raven's ThreadedRequestsHTTPTransport transport class to send Google Analytics pageview hits asynchronously, so you can collect pageviews that aren't actually browser pages. We have an API on our Django site that was not designed from the ground up. We had a bunch of internal endpoints that were used by the website, so we simply exposed those as API endpoints that anybody can query. All we did was wrap certain parts carefully so as not to expose private stuff, and we wrote a simple web page where you can see a list of all the endpoints and what parameters are needed. Later we added auth-by-token. Now the problem we have is that we don't know which endpoints people use and, equally important, which ones people don't use. If we had more stats we'd be able to confidently deprecate some (for easier maintenance) and optimize some (to avoid resource overuse). Our first attempt was to use statsd to collect metrics and display them with graphite. But it just didn't work out. There are just too many different "keys". Basically, each endpoint (aka URL, aka URI) is a key. And if you include the query string parameters, the number of keys … -
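The overall shape of the approach (build a Measurement Protocol pageview payload, then send it from a background thread so the API request is not slowed down) can be sketched as follows. This is a simplified illustration rather than the raven transport the post uses; the tracking id and path are placeholders, and the `post` callable is injected so the sketch stays testable without network access:

```python
import threading
from urllib.parse import urlencode

GA_COLLECT_URL = "https://www.google-analytics.com/collect"

def pageview_payload(tracking_id, client_id, path):
    """Measurement Protocol (v1) pageview parameters.

    tracking_id and client_id are placeholders for your GA property
    and per-visitor ids.
    """
    return {
        "v": "1",           # protocol version
        "t": "pageview",    # hit type
        "tid": tracking_id,
        "cid": client_id,
        "dp": path,         # "document path"; here, an API endpoint
    }

def send_pageview_async(payload, post):
    """Fire-and-forget: POST the hit on a background thread so the
    caller's request/response cycle is never blocked on Google."""
    body = urlencode(payload)
    t = threading.Thread(target=post, args=(GA_COLLECT_URL, body))
    t.start()
    return t
```

In production, `post` would be something like `requests.post`; raven's ThreadedRequestsHTTPTransport wraps the same thread-plus-HTTP idea with retries and error handling.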
Florida Open Debate Platform Receives National Attention (The Atlantic, USA Today, Engadget)
Several national publications have featured the Florida Open Debate platform, including USA Today, Engadget, and The Atlantic. Caktus helped develop the Django-based platform on behalf of the Open Debate Coalition (ODC) in advance of the nation's first-ever open Senate debate held in Florida on April 25th. The site enabled citizens to submit debate questions as well as vote on which questions mattered most to them. Moderators then used the thirty most popular questions from the site to structure the debate between Florida Senate candidates David Jolly (R) and Alan Grayson (D). According to *The Atlantic*, more than 400,000 votes were submitted by users on the site, including more than 84,000 from Florida voters. -
ES6 For Django Lovers
ES6 for Django Lovers! The Django community is not one to fall to bitrot. Django supports every new release of Python at an impressive pace. Active Django websites are commonly updated to new releases quickly and we take pride in providing stable, predictable upgrade paths. -
Multi-table Inheritance and the Django Admin
Django's admin interface is a great way to interact with your models without having to write any view code, and, within limits, it's useful in production too. However, it can quickly get very crowded when you register lots of models. Consider the situation where you are using Django's multi-table inheritance: {% highlight python %} from django.db import models from model_utils.managers import InheritanceManager class Sheep(models.Model): sheep_id = models.AutoField(primary_key=True) tag_id = models.CharField(max_length=32) date_of_birth = models.DateField() sire = models.ForeignKey('sheep.Ram', blank=True, null=True, related_name='progeny') dam = models.ForeignKey('sheep.Ewe', blank=True, null=True, related_name='progeny') objects = InheritanceManager() class Meta: verbose_name_plural = 'sheep' def __str__(self): return '{}: {}'.format(self._meta.verbose_name, self.tag_id) class Ram(Sheep): sheep = models.OneToOneField(Sheep, parent_link=True) class Meta: verbose_name = 'ram' verbose_name_plural = 'rams' class Ewe(Sheep): sheep = models.OneToOneField(Sheep, parent_link=True) class Meta: verbose_name = 'ewe' verbose_name_plural = 'ewes' {% endhighlight %} Ignore the fact there is no specialisation on those child models: in practice you'd normally have some. Also note that I've manually included the primary key and the parent link fields. This has been done so that the actual columns in the database match, and in this case will all be `sheep_id`. This will make writing joins slightly simpler, and avoids the (not specific to Django) ORM anti-pattern of … -
(Directly) Testing Django Formsets
Django Forms are excellent: they offer a really nice API for validating user input. You can quite easily pass a dict of data instead of a `QueryDict`, which is what the request handling mechanism provides. This makes it trivial to write tests that exercise a given Form's validation directly. For instance: {% highlight python %} def test_my_form(self): form = MyForm({ 'foo': 'bar', 'baz': 'qux' }) self.assertFalse(form.is_valid()) self.assertTrue('foo' in form.errors) {% endhighlight %} Formsets are also really nice: they expose a neat way to update a group of homogeneous objects. It's possible to pass a list of dicts to the formset for the `initial` argument, but, alas, you may not do the same for passing data. Instead, it needs to be structured as the `QueryDict` would be: {% highlight python %} def test_my_formset(self): formset = MyFormSet({ 'formset-INITIAL_FORMS': '0', 'formset-TOTAL_FORMS': '2', 'formset-0-foo': 'bar1', 'formset-0-baz': 'qux1', 'formset-1-foo': 'spam', 'formset-1-baz': 'eggs' }) self.assertTrue(formset.is_valid()) {% endhighlight %} This is fine if you only have a couple of forms in your formset, but it's a bit tiresome to have to put all of the prefixes, and is far noisier. Here's a nice little helper, that takes a `FormSet` class, and a list (of dicts), and instantiates … -
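A helper along those lines is straightforward: flatten a list of dicts into the prefixed, management-form-style structure a FormSet expects. Here is a sketch of just the data-shaping step (independent of Django, so the formset itself is not constructed; the prefix and field names are whatever your formset uses):

```python
def formset_data(prefix, forms_data, initial=0):
    """Flatten a list of per-form dicts into the management-form
    structure a Django FormSet expects as its `data` argument."""
    data = {
        f"{prefix}-TOTAL_FORMS": str(len(forms_data)),
        f"{prefix}-INITIAL_FORMS": str(initial),
    }
    for i, form in enumerate(forms_data):
        for key, value in form.items():
            data[f"{prefix}-{i}-{key}"] = value
    return data
```

The test from the excerpt then shrinks to `MyFormSet(formset_data('formset', [{'foo': 'bar1', 'baz': 'qux1'}, {'foo': 'spam', 'baz': 'eggs'}]))`, with the prefixes and counts handled for you.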
2016 DBIR Highlights
The 2016 edition of Verizon’s Data Breach Investigations Report is out, and as usual it’s compelling reading. The DBIR is one of the only sources of hard data about information security, which makes it a must-read for anyone trying to run a security program in a data-driven manner. What follows are the bits that I found especially interesting, and a bit of my own commentary. Internal threats are rare [T]he Actors in breaches are predominantly external.