Django community: RSS
This page, updated regularly, aggregates blog posts from the Django community.
-
Pycon.de: reinventing streamlit - Malte Klemm
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). He asked everyone who has used streamlit to stand up (75% stood). Then everyone who thought their dashboards were getting much too complex could sit down again. Only a few were left standing. In 2015 he started out with a streamlit precursor based on bokeh. Around 2020 streamlit came around and it quickly gained a lot of popularity. He still uses streamlit and likes it. It is simple and easy. But... people ask for more functionality. Or they add multiple pages. Then it slowly starts to break down. It doesn't fit the streamlit paradigm anymore. You can use @st.cache_data to speed it up a bit if you do an expensive calculation. @st.fragment limits the execution scope: changes within a fragment only trigger the fragment to re-run. After a while, the cache_data and fragment decorators are only a band-aid on a bigger problem. It breaks down. He recently discovered https://reflex.dev/, an open source framework to quickly build and deploy web apps. It is a pure python framework, so you don't have to write typescript. But the big difference with streamlit is that the code is explicitly divided into frontend … -
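As a reminder of what those two streamlit decorators do, here is a minimal sketch (my example, not the speaker's code; assumes a recent streamlit and a hypothetical data.csv):

    import pandas as pd
    import streamlit as st

    @st.cache_data  # caches the return value, so the expensive load runs once per distinct path
    def load_data(path: str) -> pd.DataFrame:
        return pd.read_csv(path)

    @st.fragment  # widget changes inside the fragment re-run only the fragment, not the whole script
    def filter_view(df: pd.DataFrame) -> None:
        column = st.selectbox("Column", df.columns)
        st.dataframe(df[[column]])

    df = load_data("data.csv")
    filter_view(df)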
Pycon.de: serverless orchestration: exploring the future of workflow automation - Tim Bossenmaier
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). What is orchestration? Coordinated execution of multiple computer systems, applications or services. It is more than automation. Some things you can think of: Containers/dockers can be managed. Coordinating multiple workflows/tasks. Synchronizing/managing two or more apps. Coordinating microservices, data services, networks, etc. You can run code on-prem: a physical server in your cellar or a data center. You can also rent servers from a cloud provider. Another level up is serverless: you pay for only the specific compute resources you have used. AWS lambda is an example of a serverless function; it popularized the serverless paradigm. Why would you combine them? Resilience: no orchestration tool to keep running. Cost efficiency: you only pay for what you use. Scalability: automatically handled. Some options: AWS step functions, azure logic apps, azure durable functions, google's gcp workflows. A drawback for all of them is that they take a no-code/low-code approach, allowing you to click/drag/drop your workflows in the browser. It is stored in json, so as a proper software developer you are limited to uploading the json with terraform or so. There are also open source solutions. Argo workflows, for instance. Drawback of those … -
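For readers who haven't used serverless functions: a hypothetical AWS Lambda-style handler in python looks roughly like this (my sketch, not from the talk). You only ship the handler, and the platform provisions and bills the compute per invocation.

    import json

    def handler(event, context):
        # 'event' carries the trigger payload, e.g. an API gateway request or a queue message
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"hello {name}"}),
        }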
Pycon.de: thursday lightning talks
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Event announcements PyData Rhein-Main, they're looking for speakers. https://barcamps.eu/python-barcamp-leipzig-2025, end of June, Leipzig (DE) EuroScipy 2025, August, Kraków (PL) Europython 2025, July, Prague (CZ) PythonCamp Rügen (DE), nicely on the Baltic coast, https://pythoncamp.de, September Swiss python summit, https://www.python-summit.ch/ , Rapperswil (CH) NumFocus is hiring a new executive director. You have a week to apply (until May 2). Note: I probably made errors with names and titles or missed them: live-blogging lightning talks is a tad of a challenge... Dimensional modeling is not dead - Miguel Dimensional modeling started around 1996. You probably use LLM, duckdb, mlflow, huggingface or whatever. You can use ye olde dimensional modeling to slay this complexity. You have a central "fact table" and "dimension tables" that augment it. Just databases. Simple queries, simple joins. Just (re-)read the old book by Ralph Kimball and Margy Ross: "the data warehouse toolkit". Messing up with AI - Emanuelle Fabbiani AI is the fastest growing technology. What can go wrong? Apparently you can cross the English Channel by foot in 32 hours. You should eat at least one rock per day. If you lack cheese for your pizza, … -
Pycon.de: a11y need is love (but accessible docs help too) - Smera Goel
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). a11y = AccessibilitY. Documentation is an important part of every project. But what does it mean to write documentation for everyone? You can make a "Maslow pyramid" for documentation. Accurate content and install instructions are the basic necessity. For many writers, accessibility is somewhere at the top: a nice cherry when you get around to it. But for some people it is a basic necessity. Accessibility means removing barriers. Making sure people can use what you build. And: with accessibility you often think about blind people or someone without an arm. But if you solve a problem for the arm-less person, you also solve it for someone who broke their arm. Or someone holding a baby. Common accessibility problems in docs: Low contrast text. Poor heading structure. Unlabeled buttons/links. No visible focus indicators. Every one of those problems adds some friction for everyone. And... docs are often read when there's pressure to fix something, so any friction is bad. Now, how do you determine if your docs are accessible? An audit can help. It can be manual or automated or a mix. There are plenty of free tools: … -
Pycon.de: fastapi and oauth2 - Semona Igama
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Full title: safeguard your precious API endpoints built on fastapi using OAuth 2.0. She introduced herself by showing an openid oauth2 access token payload :-) Several big companies wanted a way to have people log in more securely into their services. Originally, you'd use a username/password everywhere. They came up with oauth: a way to log in securely on a website using an identity from an identity provider ("logging into a different website with your google account"). Oauth2 is a generic mechanism for authorization. OpenID builds upon oauth2 and provides authentication. Note: oauth 2.1 is under development, they will incorporate pkce. pkce is used by openid, so they'll mandate 2.1 once it is ready. It is handy for authentication from the frontend (on the frontend, you cannot store private secrets, so a priv/pub mechanism isn't usable). Fastapi has a HTTPBearer scheme, which extracts a "bearer" token from the Authorization header. You can use this for oauth2. (She showed some example code that I of course couldn't type over :-) Plus a demo.) Look at RFC 9700 "best current practice for OAuth 2.0 security". Photo explanation: picture from our … -
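A minimal sketch of the HTTPBearer approach she described (my own example, not her code): fastapi's HTTPBearer pulls the bearer token from the Authorization header, and validating it against your identity provider is up to you.

    from fastapi import Depends, FastAPI, HTTPException
    from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

    app = FastAPI()
    bearer_scheme = HTTPBearer()

    def token_is_valid(token: str) -> bool:
        # placeholder: in a real app you'd verify the JWT signature and claims
        # (issuer, audience, expiry, ...) against your identity provider
        return token == "expected-token"

    def verify_token(
        credentials: HTTPAuthorizationCredentials = Depends(bearer_scheme),
    ) -> str:
        if not token_is_valid(credentials.credentials):
            raise HTTPException(status_code=401, detail="Invalid or expired token")
        return credentials.credentials

    @app.get("/precious")
    def precious_endpoint(token: str = Depends(verify_token)):
        return {"ok": True}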
Pycon.de keynote: machine learning models in a dynamic environment - Isabel Drost-Fromm
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). When the web started, you had a few pages. After a while you couldn't remember all the URLs anymore, so curated overview pages cropped up. Then even more pages, so the first automated search engines started appearing. And thus "keyword stuffing", as search engines only looked at the manual keywords. So search engines started looking at links between pages and sites. So people started gaming that system, too... Same with email. With email came spam. And thus automated mail filtering. And spammers adjusting to it. And spam filters adjusting in turn. And on and on. A cat and mouse game. Not everyone in the audience remembered this cat and mouse game with search engines and spam. If you have a security mechanism, you can expect the mechanism to be attacked. A virus scanner can be used to attack the system it protects... She once saw a quote from Harold Innis, 1952: "it should be clear that improvements in communication tends to divide mankind". For example the invention of the printing press. Soon afterwards you had someone named Luther and a split in the church and some wars, for instance... … -
Pycon.de: boosted application performance with redis and client-side caching - David Maier
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Full title: cache me if you can: boosted application performance with redis and client-side caching. Redis can be used: As an in-memory database. As a cache. For data streaming. As a message broker. Even some vector database functionality. Redis develops client libraries, for instance redis-py for python. Though you probably use redis through some web framework integration or so. Why cache your data? Well, performance, scalability, speed. Instead of scaling your database, for instance, you can also put some caching in front of it (and scale the cache instead). Caching patterns built into redis: "Look-aside": app reads from cache and if there's a miss, it looks in the actual data source instead. "Change data capture": the app reads from the cache and writes to the data source. Upon a change, the data source writes to the cache. Redis has the regular cache features like expiration (time to live, expiration keys, etc), eviction (explicitly removing known stale items), LRU/LFU (least recently used, least frequently used). Redis behaves like a key/value store. Why client side caching? Again performance and scalability. Some items/keys in redis are accessed much more often than … -
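A small sketch of the "look-aside" pattern with redis-py (my example, not from the talk): read from the cache first, and on a miss read the real data source and fill the cache with a time to live.

    import json

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    def load_user_from_database(user_id: int) -> dict:
        # placeholder for the real (slow) data source
        return {"id": user_id, "name": "example"}

    def get_user(user_id: int) -> dict:
        key = f"user:{user_id}"
        cached = r.get(key)
        if cached is not None:
            return json.loads(cached)  # cache hit
        user = load_user_from_database(user_id)  # cache miss: fall back to the data source
        r.set(key, json.dumps(user), ex=300)  # and cache it for five minutes
        return user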
Pycon.de: distributed file-systems made easy with python's fsspec - Barak Amar
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Barak is a founding engineer at lakeFS. Local storage is simple and easy. But cloud storage is real handy: scalability, security, etc. But the flexibility is a problem: every cloud storage service introduced its own way of working with it. Slightly different APIs. fsspec is a python library providing a unified interface for interacting with various storage systems, local and remote. The goal is to make remote systems work as local ones. Your python code talks to the "fsspec unified interface", which accepts a file system identifier/type (like s3), which activates that filesystem functionality. And then a file path within that type of filesystem. Why fsspec? It simplifies your code. Consistency. Enhanced capabilities. Ecosystem integration. Extensible and open source. fsspec implements the standard python .read(), .write(), .glob() etc. And also .seek(...), which you can use to do range requests, something you'd have to do yourself with s3/boto otherwise. Pandas can read files from s3 and so on, but it needs extra libraries for it. You can use fsspec and pass the file pointer to pandas. fsspec has some additional capabilities. For instance caching. By prepending simplecache:: in front of the … -
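A minimal sketch of what that looks like in practice (my example; the bucket name is made up, and s3 access assumes s3fs is installed and credentials are configured):

    import fsspec
    import pandas as pd

    # Local file: fsspec behaves like plain open()
    with fsspec.open("data/example.csv") as f:
        df_local = pd.read_csv(f)

    # Remote file on s3, with transparent local caching thanks to the simplecache:: prefix
    with fsspec.open("simplecache::s3://my-bucket/example.csv") as f:
        df_remote = pd.read_csv(f)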
Pycon.de: wednesday lightning talks
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Event announcements Pycon Italy, Bologna, https://www.pycon.it Pydata Berlin, 1-3 September, https://pydata.org/berlin2025 Euroscipy, August, Kraków, Poland, https://euroscipy.org/ Paris, 31 Sept/1 October. Humble data workshop https://humbledata.org/ Python barcamp. Much smaller than pycon.de. February or March 2026 in Karlsruhe (DE). Freelancer barcamp Hamburg (DE), 26 July 2025. Meta announcement: https://pythondeadlin.es/ , the biggest overview of python conferences. Note: I probably made errors with names and titles or missed them: live-blogging lightning talks is a tad of a challenge... Do not fear AI - John Roberts There's AI everywhere. AI will destroy the world. AI will destroy jobs. Help! He made https://dont-fear-ai.com, where he explains AI concepts, shows building AI projects, discusses AI misconceptions. ESOC, European summer of code - Frank Do you want to... meet new friends? Work on exciting open source and AI? Have fun with flags? He introduced https://www.esoc.de, the European summer of code. The first European open source program. Stipends for contributors new to open source. Developer support. Project support. Package management, PEP 751 - Nico Albers You can start with basic requirements, just a list in requirements.txt. You can improve on that by having compiled requirements, so … -
Pycon.de keynote: reasonable AI - Kristian Kersting
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Kristian Kersting is professor of AI in Germany. (And some other things, he started hiding behind a banner halfway through the introduction :-) He's got a central question in his talk: does size really matter in AI? Is there something like "reasonable artificial intelligence"? AI is not just data. AI is not just a model. He thinks it is more a "cooking recipe" for learning, for reading, for composing. When used responsibly, AI has the potential to help tackle some of the most pressing challenges across multiple domains. But... partially this potential seems to be due to "bigger is better". The "scaling hypothesis". He showed some German initiatives to have German-language LLMs. And also some European initiatives, as the Big Tech monoculture is a problem due to high costs and lack of expertise. According to him, AI depends on open source. He also mentioned that it doesn't have to be just isolated European: we should cooperate worldwide. A quote he showed: "with petabytes of data, you could say correlation is enough, we can stop looking for models". AI is exciting and bigger sometimes is better, but unfortunately scaling … -
Pycon.de: python performance unleashed - Thomas Berger
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). The talk is about just the standard library: how to use it better to speed up your python code. So not about different interpreters or about using external libraries. Python is not known as a speedy language, but actually its speed is continuously going up. 3.13 has twice the speed of 2.7, so the first tip is to simply upgrade your python version. No optimisation talk is complete without the Donald Knuth quote about "premature optimisation is the root of all evil". Don't spend all your time worrying about efficiency in the wrong places or at the wrong time. Where to optimise? Well, profile. %timeit for quick micro-benchmarking (see the docs), cProfile for function-level profiling. Always measure before you optimise. cProfile is a module in the standard library, you use it like cProfile.run("your_function()"). You then get a list of all the behind-the-scenes calls plus their duration. After profiling, you need to identify the actual problem. Is it a CPU problem? Or is it memory-related or an IO problem? Afterwards, you can optimise it. But always consider if it is actually worth optimising, see the XKCD comic. … -
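A quick sketch of both tools (my example): cProfile from the standard library for function-level profiling, and %timeit in IPython/Jupyter for micro-benchmarks.

    import cProfile

    def slow_function():
        return sum(i * i for i in range(1_000_000))

    # Prints every call made behind the scenes plus its (cumulative) duration
    cProfile.run("slow_function()", sort="cumulative")

    # In IPython or Jupyter you would micro-benchmark a single expression with:
    #   %timeit slow_function()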
Pycon.de: open table formats - Franz Wöllert
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Open table formats are something that changed the data landscape for the better in the last few years. It all begins with data: all our advancements as human beings are built on it. Data fuels knowledge creation. What would we do without books, for instance? He works for a company that builds big printing machines (Heidelberg Druckmaschinen). Those machines produce lots of monitoring data. In 2015 they used spark, hadoop and cassandra as their big data platform, which was state-of-the-art at the time. But... they're hoping to finally shut it down this year. It was expensive, difficult to maintain and limited in scalability (at least in the way they set it up). They started using the cloud. AWS, google cloud and azure promise a lot. Scalability, servers optimised for different use-cases, etcetera. But moving from your own hadoop instance to the cloud isn't easy. Snowflake and databricks are data platform giants that promise to take a lot of this kind of work off your hands. One of the apache techniques they still use is apache parquet because it has strict types (int, string, bool, float). Those strict types help a lot with testing. … -
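A tiny illustration of the strict-types point (my example, not from the talk): parquet stores the schema with the data, so dtypes survive a round trip, unlike with csv. Writing parquet from pandas needs pyarrow or fastparquet installed.

    import pandas as pd

    df = pd.DataFrame(
        {
            "machine_id": pd.array([1, 2, 3], dtype="int64"),
            "temperature": pd.array([71.5, 72.0, 69.8], dtype="float64"),
            "ok": pd.array([True, True, False], dtype="bool"),
        }
    )
    df.to_parquet("monitoring.parquet")

    restored = pd.read_parquet("monitoring.parquet")
    print(restored.dtypes)  # int64, float64, bool: the types come back as written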
Pycon.de: Guiding data minds: how mentoring transforms careers for both sides - Anastasia Karavdina
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). Why do you need a mentor when you have a manager? Well, not all managers have that much experience or technical knowledge. And... they might be too busy fighting their own imposter syndrome :-) And especially they have different goals and objectives. So we might need someone else to help us further in our careers. Yeah: lifelong learning. There's lots of innovation. Breadth (new tools) and depth (new algorithms, more details). We need more mentors in tech. Her goal with this talk: becoming a mentor is not so scary and has advantages for yourself. Definition: mentoring is a professional relationship where a more experienced person (the mentor) supports the growth, development and success of a less experienced person (the mentee). As a mentor you don't need to know everything, but with your experience you can often help and guide anyway. Advantages for a mentee include clarity on your direction, some acceleration of your learning, etc. But the most important one is probably that someone monitors your development, which makes your improvement/learning process more explicit. Mentorship can take two forms. It depends on the people and the situation which … -
Pycon.de: spherical geometries in spherely and geopandas - Joris Van den Bossche
(One of my summaries of the 2025 pycon.de conference in Darmstadt, DE). The earth is no longer flat, at least in the spherely/geopandas world :-) In shapely and geopandas, latitude/longitude are normally treated as rectangular x/y coordinates. This is incorrect as the earth is round. So distances you calculate will be wrong. A solution is to work with projected coordinates. So if you use a local projection (like "Lambert 72" for Belgium), your calculations will be perfectly fine within that region. The projected "flat" coordinates will be good enough. Why don't we use projected coordinates for everything? Well, because it isn't possible. You get distortion, like the regular "mercator" projection that dramatically enlarges Greenland, for instance. Data that's near the poles also gets funny. Oh, and if you have data on Fiji, which is around 180 degrees longitude, your data might be projected partially on the left and partially on the right side of the map... Well, what is the shape of the earth? You can treat it as a perfect sphere. That's a useful approximation, which is also used in geopandas. But, yes, it is an approximation: the earth is actually more a spheroid. But a perfect sphere is … -
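A minimal sketch of the projection point (my example): reproject lat/lon data to a local projected CRS before measuring, here Belgian Lambert 72 (EPSG:31370), so distances come out in metres.

    import geopandas as gpd
    from shapely.geometry import Point

    # Two points in Belgium, given as longitude/latitude (WGS 84)
    gdf = gpd.GeoDataFrame(geometry=[Point(4.35, 50.85), Point(4.40, 51.22)], crs="EPSG:4326")

    # Distances on raw lat/lon degrees would be meaningless; reproject to a local flat CRS first
    geom = gdf.to_crs(epsg=31370).geometry
    print(geom.iloc[0].distance(geom.iloc[1]))  # distance in metres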
Django News - Django 5.2 Fixes, Wagtail Updates & GeoDjango Mapping - Apr 18th 2025
News PyPI: Incident Report: Organizations Team privileges PyPI resolved an issue where organization team privileges persisted after user removal by swiftly deploying a security patch and thoroughly auditing role assignments. pypi.org Django Software Foundation DSF member of the month - Öykü Gümüş Recognizing experienced Django developer Öykü Gümüş for leadership, mentoring, and innovative work with GraphQL and enhanced async support in Django. djangoproject.com Updates to Django Today 'Updates to Django' is presented by Abigail Afi Gbadago from the DSF Board and Djangonaut Space!🚀 Last week we had 10 pull requests merged into Django by 6 different contributors 🎉 This week’s Django highlights 🌟 A regression in Django 5.2 where select_for_update(of=...) crashed when combined with values()/values_list() has been fixed. Overwritten file contents are now truncated in file_move_safe. The values_list method now ensures that duplicate field name references are assigned unique aliases. This maintains the behavior from before Django 5.2. Django Newsletter Wagtail CMS What's new in Wagtail CMS - May 2025 We'll share features in the latest Wagtail 6.3 and 6.4 releases, new updates, future features and more! wagtail.org Sponsored Link 1 Ready to get your Django project to the next level? Elevate your Django projects with HackSoft! Try … -
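For context, the fixed regression concerns this kind of query pattern (a hedged illustration with a hypothetical Task model, not the actual regression test): select_for_update(of=...) combined with values()/values_list(), run inside a transaction.

    from django.db import transaction

    from myapp.models import Task  # hypothetical model

    def claim_next_task_id():
        with transaction.atomic():
            return (
                Task.objects.select_for_update(of=("self",))
                .filter(done=False)
                .values_list("id", flat=True)
                .first()
            )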
Coding with LLMs - Frank Wiles
Frank Wiles personal site; RevSys - Django Consultancy; kube-anypod; kube-secrets; DjangoCon US 2024: Brief History of Django with Frank Wiles; Django 6.x Steering Council; Aider AI; Claude; Django for APIs, 5th Edition. Sponsor: This episode was brought to you by HackSoft, your development partner beyond code. From custom software development to consulting, team augmentation, or opening an office in Bulgaria, they’re ready to take your Django project to the next level! -
You probably don’t need a CMS
Many people quickly reach for a big CMS package for Django, when often this is overkill. Here’s how to use a simple Django model with a CKEditor 5 WYSIWYG field, including embedded media like YouTube. -
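A minimal sketch of the idea (assuming the django-ckeditor-5 package; the article's exact setup may differ): a single Django model with a rich-text field often covers what people reach for a CMS for.

    from django.db import models
    from django_ckeditor_5.fields import CKEditor5Field  # provided by django-ckeditor-5

    class Page(models.Model):
        title = models.CharField(max_length=200)
        slug = models.SlugField(unique=True)
        body = CKEditor5Field("Body", config_name="default")  # WYSIWYG content, incl. embeds

        def __str__(self):
            return self.title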
Customizing Django admin fieldsets without fearing forgotten fields
When defining fieldsets on Django modeladmin classes I always worry that I'll forget to update the fieldsets later when adding or removing model fields, and not without reason: it has already happened to me several times. Forgetting to remove fields is mostly fine because system checks will complain about it; forgetting to add fields may be really bad. A recent example was a crashing website because a required field was missing from the admin and therefore was left empty when creating new instances! I have now published another Django package which solves this by adding support for specifying the special "__remaining__" field in a fieldsets definition. The "__remaining__" placeholder is automatically replaced by all model fields which haven’t been explicitly added already or added to exclude. Here’s a short example for a modeladmin definition using django-auto-admin-fieldsets:

    from django.contrib import admin
    from django_auto_admin_fieldsets.admin import AutoFieldsetsModelAdmin
    from app import models

    @admin.register(models.MyModel)
    class MyModelAdmin(AutoFieldsetsModelAdmin):
        # Define fieldsets as usual with a placeholder
        fieldsets = [
            ("Basic Information", {"fields": ["title", "slug"]}),
            ("Content", {"fields": ["__remaining__"]}),
        ]

I have used Claude Code a lot for the code and the package, and as always, I had to fix bugs … -
Django News - Python 3.14.0a7 and every Python now available - Apr 11th 2025
News Six Python releases (3.9 to 3.13) and a new 3.14.0a7 are now available Not one, not two, not three, not four, not five, but six releases! Is this the most in a single day? blogspot.com Annual meeting of DSF Members at DjangoCon Europe DSF annual meeting at DjangoCon Europe enables community discussions on current and future projects, engaging both in-person and remote DSF members. djangoproject.com PEP 750 – Template Strings PEP 750 Template Strings (t-strings) were accepted and will be added to Python 3.14. python.org Updates to Django Today 'Updates to Django' is presented by Abigail Afi Gbadago from the DSF Board and Djangonaut Space!🚀 Last week we had 21 pull requests merged into Django by 13 different contributors - including 4 first-time contributors! Congratulations to 송준호, gtossou🚀, Kelvin Adigwu🚀 and bbkfhq for having their first commits merged into Django - welcome on board!🎉 This week’s Django highlights: Field selection on QuerySet.alias() after values() has been prevented from adding an aliased value to the result set. IndexError crash when annotating an aggregate function over a group has been fixed. Tuples containing None are now discarded during lookups. Serialization support has been added for ZoneInfo objects in migrations. Setuptools have … -
Goodbye JourneyInbox - Building SaaS #218
In this episode, I declared to the stream that I’m done working on JourneyInbox as a SaaS product. I didn’t see any meaningful market adoption, so I’ve decided to pivot the project to serve only my personal needs. I used the stream to do a retrospective on the project and then convert the core logic to Go to simplify what I need to run on my server. -
Maps with Django⁽³⁾: GeoDjango, Pillow & GPS
A quick-start guide to create a web map with images, using the Python-based Django web framework, leveraging its GeoDjango module, and Pillow, the Python imaging library, to extract GPS information from images. -
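A rough sketch of the Pillow part (my example, not the article's code; assumes a reasonably recent Pillow with Image.Exif.get_ifd): read the GPS IFD from a photo's EXIF data.

    from PIL import ExifTags, Image

    def gps_info(path: str) -> dict:
        with Image.open(path) as img:
            exif = img.getexif()
            gps_ifd = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo IFD tag
            # Map numeric GPS tag ids to readable names like GPSLatitude, GPSLongitude
            return {ExifTags.GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}

    print(gps_info("photo.jpg"))  # hypothetical image with GPS EXIF data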
PySpark 101: Introduction to Big Data with Spark
Unlock PySpark for Big Data. This is a beginner-friendly course designed to introduce you to Apache Spark, a fast and scalable distributed computing framework. This class covers the fundamentals of PySpark, including: -
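For a flavour of what PySpark code looks like, a tiny generic starter (not from the course materials): create a session, load a csv and run an aggregation.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("pyspark-101").getOrCreate()

    # Hypothetical input file; header/inferSchema make Spark read column names and types
    df = spark.read.csv("events.csv", header=True, inferSchema=True)
    df.groupBy("country").agg(F.count("*").alias("events")).show()

    spark.stop()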
Running Background Tasks from Django Admin with Celery
This tutorial looks at how to run background tasks directly from Django admin using Celery. -
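A hedged sketch of the pattern (names are mine, the tutorial's code will differ): a Celery task plus a Django admin action that enqueues it, so the work runs in a background worker instead of blocking the admin request. In a real project the task would normally live in tasks.py and the action in admin.py.

    from celery import shared_task
    from django.contrib import admin

    from .models import Report  # hypothetical model with a long-running rebuild() method

    @shared_task
    def rebuild_report(report_id):
        Report.objects.get(pk=report_id).rebuild()

    @admin.action(description="Rebuild selected reports in the background")
    def rebuild_selected(modeladmin, request, queryset):
        for report in queryset:
            rebuild_report.delay(report.pk)  # enqueue; the Celery worker does the actual work

    @admin.register(Report)
    class ReportAdmin(admin.ModelAdmin):
        actions = [rebuild_selected]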
Weeknotes (2025 week 15)
Djangonaut Space: We have already reached the final week of the Djangonaut Space session 4. I had a great time as a navigator and am looking forward to participating more, but for now I’m also glad that I do not have the additional responsibility, at least for the near future. We have done great work on the django-debug-toolbar in our group; more is to come. Progress on the prose editor: I have done much work on django-prose-editor in the last few weeks and after a long list of alphas and betas I’m nearing a state which I want to release into the wild. The integration has been completely rethought (again) and now uses JavaScript modules and importmaps. The groundwork to support all of that in Django has been laid in django-js-asset. The nice thing about using JavaScript modules and importmaps is that we now have an easy way to combine the power of modern JavaScript customization with easy cache busting using Django’s ManifestStaticFilesStorage. A longer post on this is brewing and I hope to have it ready soon-ish. As a sneak peek, here’s the way it works:

    from django_prose_editor.fields import ProseEditorField

    content = ProseEditorField(
        extensions={
            … -
Tips for Tracking Django Model Changes with django-pghistory
Django and its admin interface are a big part of why Caktus uses Django, but the admin's ability to log database changes is limited. For example, it shows only changes made via the Django admin, not via other parts of the site. We've written previously on the Caktus blog about django-simple-history, a tool we use to track model changes in the admin and other parts of our Django projects. django-simple-history works well for some cases, but as a Python solution, it is not able to track changes made directly in the database with raw SQL. Over the last year, we've been using yet another tool, django-pghistory, to track data changes in Postgres tables with 5+ million records, so I thought I'd write a short post with some of the things we've learned over this time. Track changes selectively: django-pghistory works using Postgres triggers, which are a great solution for tracking and recording changes at a low level in the database (no matter what initiated the changes). That said, there are two caveats to this approach which are worth noting: The triggers need to be removed and re-added during schema changes. django-pghistory handles this for you; however, we found it makes …
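For reference, selective tracking boils down to decorating only the models you care about (a minimal sketch; check the django-pghistory docs for the exact decorator arguments in your version):

    import pghistory
    from django.db import models

    @pghistory.track()  # installs Postgres triggers recording inserts/updates/deletes for this model
    class Order(models.Model):
        status = models.CharField(max_length=20)
        total = models.DecimalField(max_digits=8, decimal_places=2)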