Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
How to aggregate hierarchical data efficiently in Django without causing N+1 queries?
I’m working with a hierarchical model structure in Django, where each level can represent a region, district, or village. The structure looks like this: class Location(models.Model): name = models.CharField(max_length=255) parent = models.ForeignKey( 'self', on_delete=models.CASCADE, related_name='children', null=True, blank=True ) def __str__(self): return self.name Each Location can have child locations (for example: Region → District → Village). I also have a model that connects each location to a measurement point: class LocationPoint(models.Model): location = models.ForeignKey(Location, on_delete=models.CASCADE) point = models.ForeignKey('Point', on_delete=models.DO_NOTHING, db_constraint=False) And a model that stores daily or hourly measurement values: import uuid class Value(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) point = models.ForeignKey('Point', on_delete=models.DO_NOTHING, db_constraint=False) volume = models.FloatField(default=0) timestamp = models.DateTimeField() Goal: I want to aggregate values (e.g., total volume) for each top-level region, including all nested child levels (districts, villages, etc.). Example: Region A → Total Volume: 10,000 Region B → Total Volume: 20,000 Problem: When I try to calculate these sums recursively (looping over children and summing their related Value records), the number of database queries increases dramatically — a classic N+1 query problem. Question: How can I efficiently compute aggregated values across a hierarchical model in Django — for example, summing all Value.volume fields for every descendant location — … -
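One common fix is to drop the recursion entirely: fetch every Location in one query and a per-location volume aggregate in a second query (joining through LocationPoint and Value), then roll the totals up to each root in Python. The exact ORM joins depend on the Point model, so this sketch, with illustrative names and fabricated data, shows just the roll-up step:

```python
# Roll-up sketch, assuming you already fetched (id, parent_id) pairs for
# every Location plus a {location_id: total_volume} mapping, each in a
# single query, so the query count is fixed regardless of tree depth.

def rollup_volumes(locations, volume_by_location):
    """Return {root_id: total volume of the root and all its descendants}."""
    parent = dict(locations)  # id -> parent_id (None for top-level regions)

    def find_root(loc_id):
        while parent[loc_id] is not None:
            loc_id = parent[loc_id]
        return loc_id

    totals = {}
    for loc_id, _ in locations:
        root = find_root(loc_id)
        totals[root] = totals.get(root, 0) + volume_by_location.get(loc_id, 0)
    return totals

# Region (1) -> District (2) -> Village (3); Region (4) stands alone.
locs = [(1, None), (2, 1), (3, 2), (4, None)]
volumes = {2: 4000, 3: 6000, 4: 20000}
print(rollup_volumes(locs, volumes))  # {1: 10000, 4: 20000}
```

On PostgreSQL, a recursive CTE via raw SQL is another option, and libraries such as django-mptt or django-treebeard maintain tree metadata that makes subtree aggregation a single query.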
Deploy Django and Nginx under subpath
I'm trying to deploy a Django app with Gunicorn and Nginx under a subpath. I'm inside a corporate network, and the path www.example.com/myapp points to the IP 192.168.192.77:8080 of my PC on the local network (I have no control over the pathing nor the corporate network, just that port exposed to the internet through /myapp). I tried many things including this: How to host a Django project in a subpath? , but it doesn't show the Django welcome page, just the Nginx welcome page. I also can't access the Django admin page that should be on the path /myapp/admin, just a 404 page. This is the config of my site in the sites-available folder for Nginx: server { listen 8080; server_name 192.168.192.77; location /myapp/static/ { root /home/user/myapp; } location /myapp/ { include proxy_params; proxy_pass http://unix:/run/gunicorn.sock; } } I tried proxy_set_header SCRIPT_NAME /myapp; but it didn't work. If I don't configure any paths, it shows the Django welcome page at /myapp but then I can't access /myapp/admin, also a 404. Curiously, if I start the Django development server using python manage.py runserver without Nginx it works: the Django welcome page shows at /myapp and I can access /myapp/admin with the … -
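One thing worth checking alongside the Nginx config is whether Django itself knows about the prefix. A minimal settings sketch, assuming the app is served under /myapp; use either this or the SCRIPT_NAME header from the proxy, not both:

```python
# settings.py sketch. FORCE_SCRIPT_NAME makes reverse() and the admin
# generate /myapp/... URLs even when the proxy strips the prefix before
# the request reaches Gunicorn.
FORCE_SCRIPT_NAME = "/myapp"
STATIC_URL = "/myapp/static/"
```

If you prefer the proxy_set_header SCRIPT_NAME /myapp; route instead, make sure the location block does not also forward the /myapp prefix in the path, and restart Gunicorn after changing either side.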
Django transaction.atomic() on single operation prevents race conditions?
Why do I need to use atomic() when I have only one db operation inside the atomic block? My AI assistant tells me that it prevents race conditions, but I don't use select_for_update() inside. It says that the db looks at unique constraints and sets a lock automatically, but only when I use atomic(), and that if I use it without atomic(), race conditions can happen. Is that true? Can you explain this behaviour? I don't understand how it works if I have only one db operation inside. Code example: with atomic(): Model.objects.create(....) -
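A way to see why the assistant's claim is misleading: uniqueness is enforced by the database's UNIQUE constraint at insert time, with or without atomic(); a single statement is already atomic on its own. Where atomic() genuinely matters for a lone create() is when you want to catch the IntegrityError and keep the surrounding transaction usable (Django's transaction docs recommend wrapping the statement you expect to fail in its own atomic block). A toy in-memory sketch of the constraint doing the work:

```python
# Toy model of a UNIQUE constraint: the dict key plays the constraint,
# and a single insert checks and writes in one indivisible step, which
# is what the database does for one INSERT regardless of atomic().

class UniqueViolation(Exception):
    pass

table = {}  # username -> row

def insert_user(username):
    if username in table:          # the check the DB performs on INSERT
        raise UniqueViolation(username)
    table[username] = {"username": username}

insert_user("alice")               # request A succeeds
try:
    insert_user("alice")           # request B races on the same username
    outcome = "duplicate row created"
except UniqueViolation:
    outcome = "duplicate rejected by the constraint"

print(outcome)  # the constraint, not atomic(), rejected the duplicate
```

The race atomic() cannot fix by itself is application-level "check, then insert" logic; for that you need the constraint (and try/except IntegrityError) or explicit locking.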
can't find xgettext or msguniq but gettext-base is installed
As part of a django project, I need to build translation *.po files, but I have the error CommandError: Can't find xgettext. Make sure you have GNU gettext tools 0.19 or newer installed. when I run django-admin makemessages -a and CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.19 or newer installed. when I run django-admin makemessages -l en. I see that what is missing is supposed to come from the os and I run Ubuntu 25.04. So I tried to run xgettext and msguniq on their own. Each time I get Command 'xgettext' not found, but can be installed with: sudo apt install gettext So I tried doing just that but apt fails with Error: Unable to locate package gettext. However when I try to run gettext -V I do have gettext v.0.23.1 installed. It seems to come from package gettext-base that is indeed installed but can't seem to be used. I searched this over the internet but can't seem to find anything helpful. I don't know if it is necessary but I do have python-gettext installed in my python venv also. Any idea how to make python find gettext in this situation? -
Encoding full payload and decoding in server in REST
Issue: WAF is showing some errors because some HTML tags are included in my payload responses (mostly field-like messages and user guides). Sometimes I am also sending R programming language code to the server, which will just be stored in the database. During the WAF security check, it reports a vulnerability saying HTML tags and code are detected. My current solution: our team proposed encoding the entire payload and decoding the encoded payload in the Django middleware. But I am wondering if this is the best approach after all. Validation and question: Will this approach be efficient in the long run? If you have faced the same issue, can you please suggest the right approach? Thank you -
Deploying Dockerized (React + Django + PostgreSQL ) app with custom license to a client without exposing source code
I am running a test simulation on a virtual server in VirtualBox to see how the procedure of installing a web application using Docker would work on a client server. My stack includes: Frontend: React.js, built into a Docker image Backend: Django (Python) in Docker Database: PostgreSQL 16 in Docker Orchestration: Docker Compose managing all services Environment variables: Managed via .env.docker for the backend (database credentials, email settings, etc.) and for the frontend at build time (API URL) License: A custom license mechanism I implemented myself, which must be included and validated on the client server using license.json as the key sold to clients In my test: I built the backend and frontend Docker images locally on my development machine. For the frontend, I rebuilt the image with REACT_APP_API_URL=http://localhost:8000 so that it points to the local backend. I exported the backend and frontend images as .tar files to simulate distribution to a client server. On the client server (virtual machine), I loaded the images and tried running them using Docker Compose. I observed that if the frontend API URL is not baked in at build time, React requests go to undefined/users/.... Question: For a real client deployment using this stack, … -
How to Avoid JWT Collision While Receiving Bearer Token
I am doing a Django project where I am using JWT tokens for authentication. The problem is that two different JWT tokens, differing only slightly, are both accepted as valid by the backend with the same signature. What is the reason? I also tried the implementation in FastAPI using PyJWT and the result was much the same: two different tokens were accepted by the backend server. Valid token from backend: with c at the end. Other forged tokens that are accepted: with d at the end, with e at the end. Other forged tokens that are rejected: with b at the end, with g at the end. -
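What you're seeing is almost certainly base64url malleability, not a broken signature. An HMAC-SHA256 signature is 32 bytes (256 bits), but 43 base64url characters carry 258 bits, so the last character's two low-order bits are ignored on decode: four different final characters ('c', 'd', 'e', 'f' form one such group) decode to identical signature bytes, while others ('b', 'g') do not, exactly matching your accepted/rejected lists. A self-contained sketch (the key and payload are made up):

```python
import base64, hashlib, hmac

secret = b"demo-secret"                    # hypothetical key
signing_input = b"header.payload"          # stand-in for the signed part
sig = hmac.new(secret, signing_input, hashlib.sha256).digest()  # 32 bytes

enc = base64.urlsafe_b64encode(sig).rstrip(b"=").decode()
assert len(enc) == 43    # 43 * 6 = 258 bits for a 256-bit signature

# The final character contributes 6 bits but only 4 are used; flipping
# one of its 2 trailing bits gives a different string, same bytes:
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"
i = alphabet.index(enc[-1])
forged = enc[:-1] + alphabet[i ^ 1]

def b64url_decode(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

print(enc != forged)                                # True
print(b64url_decode(enc) == b64url_decode(forged))  # True: same signature
```

So no forgery is happening in the cryptographic sense: the decoded signature bytes are identical, making it effectively the same token. Verifiers that re-encode and compare strings, or enforce canonical base64url, reject the non-canonical variants.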
Can't insert rows into Supabase profile table even after creating the RLS policy to do so for the sign up feature
Again, I am quite new to Supabase so I apologize in advance if I don't provide clear details in this post or mess up with some terms or something Basically, I am doing auth using Supabase and have this table called "profiles" with columns: id - UUID username - text email - text now when I create a new account using Supabase, it works, the account gets registered and shows up in the auth tab, but the new row doesn't get inserted into profiles? user = response.user if user: resp = supabase.table("profiles").insert({ "id": user.id, "username": username, "email": email }).execute() print(resp) request.session["user_id"] = user.id request.session["username"] = username return redirect("home") Now, my RLS for the profiles table is: Enable insert for authenticated users only, INSERT, anon, authenticated and I am using a service key to create the supabase client. Even after all that, I keep getting the error -> APIError: {'message': 'new row violates row-level security policy for table "profiles"', 'code': '42501', ...} PLEASE HELP ME I HAVE NO IDEA HOW TO FIX THIS, I almost let AI take over my code atp but nahh I'm not that desperate 💔 -
Is it possible to force mysql server authentication using django.db.backends.mysql?
it's my first question on stack overflow because I can't find relevant information in Django documentation. Is it possible to force mysql server authentication with ssl using django.db.backends.mysql? I have checked its implementation in Django Github and it seems it supports only 3 ssl arguments: ca, cert and key. What I need is equivalent of --ssl-mode=VERIFY_IDENTITY. Has anyone found some workaround for this problem? Here is my current configuration. TLS channel is working as expected, but identity of MySQL server is not validated. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': env('DB_NAME'), 'USER': env('DB_USER'), 'PASSWORD': env('DB_PASSWORD'), 'HOST': env('DB_HOST'), 'PORT': env('DB_PORT'), 'CONN_MAX_AGE': 600, 'OPTIONS':{ 'ssl':{ 'ca': env('CA_CERT'), 'cert': env('CERT'), 'key': env('KEY') } } } } -
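A workaround several hosted-MySQL deployment guides rely on: Django forwards every key in OPTIONS straight to MySQLdb.connect(), and recent mysqlclient versions accept an ssl_mode argument, so VERIFY_IDENTITY can be requested directly. Treat this as an assumption to verify against your driver version, not documented django.db.backends.mysql behaviour:

```python
# settings.py sketch; env() as in the question. The 'ssl_mode' key is
# passed through to mysqlclient, which (in recent versions) maps it to
# the server-identity check that --ssl-mode=VERIFY_IDENTITY performs.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        # ... NAME / USER / PASSWORD / HOST / PORT / CONN_MAX_AGE as before ...
        'OPTIONS': {
            'ssl_mode': 'VERIFY_IDENTITY',
            'ssl': {'ca': env('CA_CERT'), 'cert': env('CERT'), 'key': env('KEY')},
        },
    }
}
```

You can confirm it took effect by checking that a connection to a host whose certificate CN/SAN does not match is rejected.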
How to reuse a Django model for multiple relationships
I want to make a task model and a user model. And I want each task to be able to be related to 3 users. Each task should be related to a creator user, an assignee user, and a verifier user. And I want to only have one user table. My inclination is to have 3 foreign keys on the task table: creator_id, assignee_id, and verifier_id. Is this the correct way to do it? How do I model that in Django? -
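Yes, three foreign keys to the one user table is the standard way to model this. The only Django-specific wrinkle is that each ForeignKey needs its own related_name so the reverse accessors on the user model don't collide. A sketch (field and related names are illustrative):

```python
# models.py sketch: three FKs from Task to the single user table,
# each with a distinct related_name for the reverse relation.
from django.conf import settings
from django.db import models

class Task(models.Model):
    creator = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="created_tasks",
    )
    assignee = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="assigned_tasks",
    )
    verifier = models.ForeignKey(
        settings.AUTH_USER_MODEL, on_delete=models.CASCADE,
        related_name="verified_tasks",
    )
```

Reverse access then reads naturally: user.created_tasks.all(), user.assigned_tasks.all(), user.verified_tasks.all(). Consider PROTECT or SET_NULL instead of CASCADE for assignee/verifier if tasks should outlive those users.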
How to aggregate a group by query in django?
I'm working with time series data which are represented using this model: class Price: timestamp = models.IntegerField() price = models.FloatField() Assuming timestamp has 1 min interval data, this is how I would resample it to 1 hr: queryset = ( Price.objects.annotate(timestamp_agg=Floor(F('timestamp') / 3600)) .values('timestamp_agg') .annotate( timestamp=Min('timestamp'), high=Max('price'), ) .values('timestamp', 'high') .order_by('timestamp') ) which runs the following sql under the hood: select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp Now I want to calculate a 4 hr moving average, usually calculated in the following way: select *, avg(high) over (order by timestamp rows between 4 preceding and current row) ma from (select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp) or Window(expression=Avg('price'), frame=RowRange(start=-4, end=0)) How to apply the window aggregation above to the first query? Obviously I can't do something like this since the first query is already an aggregation: >>> queryset.annotate(ma=Window(expression=Avg('high'), frame=RowRange(start=-4, end=0))) django.core.exceptions.FieldError: Cannot compute Avg('high'): 'high' is an aggregate -
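Since Django won't let a Window wrap an aggregate in the same queryset, the usual escape hatches are (a) running the nested SQL yourself via connection.cursor() or Manager.raw(), or (b) fetching the hourly rows and computing the moving average in Python. A standalone sqlite3 illustration of (a); the table name matches the question, the minute data is fabricated:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE core_price (timestamp INTEGER, price REAL)")
# two fake hours of 1-minute data: hour 0 peaks at 10, hour 1 at 20
rows = [(t, 10.0) for t in range(0, 3600, 60)] + \
       [(t, 20.0) for t in range(3600, 7200, 60)]
con.executemany("INSERT INTO core_price VALUES (?, ?)", rows)

# The grouped query becomes a subquery; the window runs over its rows.
sql = """
SELECT timestamp, high,
       AVG(high) OVER (ORDER BY timestamp
                       ROWS BETWEEN 4 PRECEDING AND CURRENT ROW) AS ma
FROM (SELECT MIN(timestamp) AS timestamp, MAX(price) AS high
      FROM core_price GROUP BY timestamp / 3600) AS hourly
ORDER BY timestamp
"""
print(con.execute(sql).fetchall())  # [(0, 10.0, 10.0), (3600, 20.0, 15.0)]
```

In the Django project the same statement can be executed with connection.cursor() against the real table; option (b) is often simpler if the hourly row count is modest.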
How to integrate JWT authentication with HttpOnly cookies in a Django project that already uses sessions, while keeping roles and permissions unified?
I currently have a monolithic Django project that uses Django’s session-based authentication system for traditional views (login_required, session middleware, etc.). Recently, I’ve added a new application within the same project (also under the same templates directory) that communicates with the backend via REST APIs (Django REST Framework) and uses JWT authentication with HttpOnly cookies. The goal is for both parts (the old and the new) to coexist: The legacy sections should continue working with regular session-based authentication. The new app should use JWT authentication to access protected APIs. The problem I’m facing is how to properly handle permissions and roles across both authentication systems (sessions and JWT) without duplicating logic or breaking compatibility. Here’s what I want to achieve: Roles and permissions (e.g., X, Y, Z) should be defined centrally in the backend (either using Django Groups or a custom Role model). On the backend, traditional views should use @login_required, while API views should use JWTAuthentication with custom permission classes. On the frontend, I want to show or hide sections, submenus, or information depending on the authenticated user’s roles and permissions. (How can I properly integrate this?) All of this must work within the same Django project and the same … -
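One way to keep roles and permissions unified is to let DRF accept either credential on API views, so request.user, and therefore Django's groups and permissions, is the single source of truth regardless of how the request authenticated. A settings sketch, assuming djangorestframework-simplejwt provides the JWT side:

```python
# settings.py sketch: session-authenticated browser requests and
# JWT-cookie requests both resolve to the same request.user.
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": [
        "rest_framework.authentication.SessionAuthentication",
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ],
    "DEFAULT_PERMISSION_CLASSES": [
        "rest_framework.permissions.IsAuthenticated",
    ],
}
```

Template views keep @login_required; API views check the same Groups/Permissions via DRF permission classes; and the frontend can fetch the user's roles from a small "current user" endpoint to decide which sections and submenus to render.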
Changing Django Model Field for Hypothesis Generation
I'm testing the generation of an XML file, but it needs to conform to an encoding. I'd like to be able to simply call st.from_model(ExampleModel) and have the text fields conform to this encoding without needing to go over each and every text field. Something like: register_field_strategy(models.CharField, st.text(alphabet=st.characters(blacklist_categories=["C", "S"], blacklist_characters=['&']))) I searched around but didn't find anything which could do this. Does anyone have any advice on how to make something like this work? -
Row in Postgres invisible from the Django ORM but visible from the shell [closed]
Hello, I'm running into incomprehensible behaviour between the Django shell (python manage.py shell) and code executed via the server (python manage.py runserver). The query below returns an empty queryset from the Django ORM, whereas from the shell (or pgAdmin with a raw SQL query) it returns a non-null result: DossierValideur.objects.filter(id_dossier_id=185).values_list("id_instructeur", flat=True) It is the only id_dossier_id that causes a problem, because the same query works for all the others. Context: I'm using Django + PostgreSQL with several schemas (public, avis, documents, instruction, utilisateurs). Configuration: Django 5.1.7, PostgreSQL 15, Python 3.12, OS: Windows. In my settings.py: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': os.environ.get('BDD_NAME'), 'USER': os.environ.get('BDD_USER'), 'PASSWORD': os.environ.get('BDD_PASSWORD'), 'HOST': os.environ.get('BDD_HOSTNAME'), 'PORT': os.environ.get('BDD_PORT'), 'OPTIONS': { 'options': '-c search_path=public,avis,documents,instruction,utilisateurs' }, } } Models involved: my two models concerned (identical base but different schemas): class Instructeur(models.Model): id = models.AutoField(primary_key=True) id_ds = models.CharField(unique=True, blank=True, null=True) email = models.CharField(unique=True) id_agent_autorisations = models.ForeignKey(AgentAutorisations, models.RESTRICT, db_column='id_agent_autorisations') class Meta: managed = False db_table = '"utilisateurs"."instructeur"' def __str__(self): if self.id_agent_autorisations : return f"{self.id_agent_autorisations.nom} {self.id_agent_autorisations.prenom}" else : return self.email class Dossier(models.Model): id = models.AutoField(primary_key=True) id_ds = models.CharField(unique=True, blank=True, null=True) id_etat_dossier = models.ForeignKey(EtatDossier, models.RESTRICT, db_column='id_etat_dossier') id_etape_dossier = models.ForeignKey(EtapeDossier, models.RESTRICT, db_column='id_etape_dossier', default=10) numero = models.IntegerField(unique=True) date_depot = … -
How can I set a fixed iframe height for custom preview sizes in Wagtail’s page preview?
I’m extending Wagtail’s built-in preview sizes to include some additional device configurations. Wagtail natively supports device_width, but not device_height. I’d like to define both width and height for the preview iframe instead of having it default to height: 100%. Here’s an example of my mixin that extends the default preview sizes: from wagtail.models import Page, PreviewableMixin from django.utils.translation import gettext_lazy as _ class ExtendedPreviewSizesMixin(PreviewableMixin): """Extend the default Wagtail preview sizes without replacing them.""" @property def preview_sizes(self): base_sizes = super().preview_sizes extra_sizes = [ { "name": "12_inch", "icon": "hmi-12", "device_width": 1280, "device_height": 800, # not supported by Wagtail by default "label": _("Preview in 12-inch screen"), }, { "name": "24_inch", "icon": "hmi-24", "device_width": 1920, "label": _("Preview in 24-inch screen"), }, ] return base_sizes + extra_sizes @property def preview_modes(self): base_modes = super().preview_modes extra_modes = [ ("custom", _("Custom Preview")), ("custom_with_list", _("Custom Preview with List")), ] return base_modes + extra_modes def get_preview_template(self, request, mode_name): if mode_name == "custom": return "previews/preview_custom.html" if mode_name == "custom_with_list": return "previews/preview_custom_with_list.html" return "previews/default_preview.html" By default, Wagtail sets the preview iframe width using: width: calc(var(--preview-iframe-width) * var(--preview-width-ratio)); There doesn’t seem to be an equivalent variable for height. Question: Is there a way to set a fixed iframe height for custom preview sizes … -
How to enable bulk delete in Wagtail Admin (Wagtail 2.1.1, Django 2.2.28)
I’m currently working on a project using Wagtail 2.1.1 and Django 2.2.28. In Django Admin, there’s a built-in bulk delete action that allows selecting multiple records and deleting them at once. However, in Wagtail Admin, this functionality doesn’t seem to exist in my current version. I want to implement a bulk delete feature for one of my custom models (not a snippet), similar to how Django Admin provides it. Here’s my model example: class membership(address): user = models.OneToOneField(m3_account, on_delete=models.CASCADE, unique=True) locality_site = models.ForeignKey(wagtailSite, null=True, on_delete=models.CASCADE) # Contact details: business_name = models.CharField(max_length=100, verbose_name='Business Name/Account Name') contact_name = models.CharField(max_length=100) phone = models.CharField(max_length=70, verbose_name="Phone Number") def __str__(self): return self.business_name Before I start writing custom logic for this, I’d like to know: Does Wagtail support bulk delete functionality in newer versions of the admin interface? If yes, from which version was it officially introduced or supported? Would upgrading from Wagtail 2.1.1 to a certain version allow me to use this feature directly (without registering my model as a snippet)? I want to keep using Wagtail’s admin interface (ModelAdmin) and not convert my model into a snippet. Any guidance on compatible versions or recommended upgrade paths would be appreciated. -
Django-Oscar: UserAddressForm override in oscar fork doesn't work
I need to override UserAddress model. My Steps: Make address app fork python manage.py oscar_fork_app address oscar_fork Override model from django.db import models from django.conf import settings from django.utils.translation import gettext_lazy as _ AUTH_USER_MODEL = getattr(settings, "AUTH_USER_MODEL", "auth.User") class Address(models.Model): city = models.CharField("city ", max_length=255, blank=False) street = models.CharField("street ", max_length=255, blank=False) house = models.CharField("house ", max_length=30, blank=False) apartment = models.CharField("apartment ", max_length=30, blank=True) comment = models.CharField("comment ", max_length=255, blank=True) class UserAddress(Address): user = models.ForeignKey( AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="addresses", verbose_name=_("User"), ) from oscar.apps.address.models import * But I came across an error during makemigrations: django.core.exceptions.FieldError: Unknown field(s) (line1, phone_number, line4, notes, state, postcode, last_name, line2, country, line3, first_name) specified for UserAddress I tried to override UserAddressForm too: from django import forms from .models import UserAddress class UserAddressForm(forms.ModelForm): class Meta: model = UserAddress fields = [ "city", "street", "house", "apartment", "comment", ] But it doesn't work. What am I doing wrong? -
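The FieldError comes from Oscar's stock UserAddressForm, which still lists line1, postcode, country, etc., fields your replacement model no longer has. The conventional override keeps Oscar's abstract model as the base so those fields survive, then adds fields on top; a sketch (assuming django-oscar's abstract_models layout):

```python
# oscar_fork/address/models.py sketch: subclass the abstract model so
# the standard fields that Oscar's forms and templates reference still
# exist, and add custom fields alongside them.
from django.db import models
from oscar.apps.address.abstract_models import AbstractUserAddress

class UserAddress(AbstractUserAddress):
    comment = models.CharField("comment", max_length=255, blank=True)

from oscar.apps.address.models import *  # noqa: E402,F403 -- keep last
```

If you genuinely need to drop Oscar's standard address fields, you must also override every form, view, and template that references them (starting with UserAddressForm), which is considerably more work than extending the abstract model.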
Django ORM fails to generate valid sql for JSONb contains
Let's start with the error from my logs: 2025-10-21 19:18:11,380 ERROR api.services.observium_port_status_service Error getting port status from store: invalid input syntax for type json LINE 1: ..." WHERE "api_jsonstatestore"."filter_key_json" @> '''{"type"... ^ DETAIL: Token "'" is invalid. CONTEXT: JSON data, line 1: '... and the query: state = ( JsonStateStore.objects.select_for_update() .filter(filter_key_json__contains={"type": "observium_port_status", "observium_port_id": observium_port_id}) .first() ) here is an example record: id created touched filter_key_json data_json 33 2025-10-21 18:19:59.873 -0500 2025-10-21 18:44:57.047 -0500 {"type": "observium_port_status", "observium_port_id": 987} redacted and the model: class JsonStateStore(models.Model): created = models.DateTimeField(auto_now_add=True) touched = models.DateTimeField(auto_now=True) filter_key_json = models.JSONField() data_json = models.JSONField() class Meta: verbose_name = "JSON State Store" verbose_name_plural = "JSON State Stores" def __str__(self): return f"JsonStateStore(key={self.filter_key_json}, created={self.created}, touched={self.touched})" def save(self, *args, **kwargs): self.touched = timezone.now() super().save(*args, **kwargs) I am on Django 4.2.2, my database backend is timescale.db.backends.postgis (github | pypi), and I am on version 0.2.13 of that package. I cannot identify any syntax error in the QuerySet call and I cannot figure out what is going wrong here. My current workaround is lock_sql = ( """ SELECT id, created, touched, filter_key_json, data_json FROM api_jsonstatestore WHERE filter_key_json @> %s::jsonb ORDER BY id ASC LIMIT 1 FOR UPDATE """ ) payload = {"type": "observium_port_status", "observium_port_id": …
Tailwind CSS 4 and DaisyUI - Menu Items stacking vertically
Stack = Django, PostgreSQL, TailwindCSS 4 using Django-Tailwind (DaisyUI plugin) and Vanilla JavaScript My menu items for desktop (lg screens and above) on the second row are stacking vertically. I don't understand why they aren't horizontal. <div class="hidden lg:flex justify-center w-full mt-2"> <ul class="menu menu-horizontal flex-row px-1"> <li><a href="#">About</a></li> <li><a href="#">Shop</a></li> <li><a href="#">Blog</a></li> </ul> </div> This is the second part of the header and needs to be underneath the search bar on lg screens and above. I am using the DaisyUI TailwindCSS 4 navbar and menu components for the header. In my style.css I have @import "tailwindcss"; @plugin "@tailwindcss/forms"; @plugin "@tailwindcss/typography"; @plugin "daisyui"; At the top so I know DaisyUI is installed. I am on DaisyUI 5.0.43 according to my package.json -
React Native Maps not showing Markers on Android, even though API data is fetched correctly
I'm building a React Native app to display location markers on a map using react-native-maps. I'm using the Google Maps provider on Android. My problem is that the map loads, but the markers are not visible, even though I can confirm that my API call is successful and returns a valid array of location data. MapsScreen.jsx:- import React, { useState, useEffect, useRef, useMemo } from "react"; import { View, Text, StyleSheet, ActivityIndicator, Alert, SafeAreaView, TouchableOpacity, StatusBar, Modal, } from "react-native"; import MapView, { Marker, Callout, PROVIDER_GOOGLE } from "react-native-maps"; import { useRoute, useNavigation } from "@react-navigation/native"; import Icon from "react-native-vector-icons/MaterialCommunityIcons"; import { fetchMaps } from "../services/maps"; const StatusIndicator = ({ text }) => ( <SafeAreaView style={styles.statusContainer}> <StatusBar barStyle="light-content" backgroundColor="#181d23" /> <ActivityIndicator size="large" color="#27F0C9" /> <Text style={styles.statusText}>{text}</Text> </SafeAreaView> ); const isValidCoord = (lat, lng) => Number.isFinite(lat) && Number.isFinite(lng) && Math.abs(lat) <= 90 && Math.abs(lng) <= 180; const MapsAnalysis = () => { const [points, setPoints] = useState([]); const [isLoading, setIsLoading] = useState(true); const [error, setError] = useState(null); const [showInfo, setShowInfo] = useState(false); const [mapReady, setMapReady] = useState(false); const route = useRoute(); const navigation = useNavigation(); const { userId } = route.params; const mapRef = useRef(null); useEffect(() => { const getLocations = … -
Python Community from Central Asia
Python Community from Central Asia - we are trying to build a cool community of Central Asian Python developers. Are there any Python developers here? -
I can only run my backend tests locally because all the instances of the mocked environment are created in the actual db because of celery
I want to create tests, but every time I run a test it triggers Celery, and Celery creates instances in my local db. That means that if I run those tests on the prod or dev servers, it will create rubbish there; maybe that will trigger other stuff and create problems in the db. How can I avoid all of that? How can I mock Celery so it doesn't cause trouble on the dev or prod server while running tests? I tried some mocking through @override_settings but it didn't work as I would like.
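Two common options: set CELERY_TASK_ALWAYS_EAGER = True in the test settings (tasks then run synchronously in-process, but their bodies still execute, so DB writes still happen), or patch the task's .delay/.apply_async in the test so the body never runs at all. A self-contained sketch of the patching approach, with hypothetical task and view names:

```python
import types
from unittest import mock

# Stand-in for a Celery task (names are hypothetical); calling the real
# thing would enqueue work that writes rows to the database.
def _real_task(*args, **kwargs):
    raise RuntimeError("real task ran: it would write rows to the DB")
_real_task.delay = _real_task       # Celery tasks expose .delay

# stand-in for the project's tasks module
tasks = types.SimpleNamespace(create_report=_real_task)

def signup(username):
    tasks.create_report.delay(username)   # the side effect under test
    return {"username": username}

# Patch .delay so nothing reaches the broker, Celery, or the database,
# while still asserting the task was scheduled with the right arguments:
with mock.patch.object(tasks.create_report, "delay") as fake_delay:
    result = signup("alice")

assert result == {"username": "alice"}
assert fake_delay.call_args.args == ("alice",)
```

In a real Django test you would write mock.patch("myapp.tasks.create_report.delay") (path hypothetical) as a decorator or context manager; combine it with a dedicated test settings module so CI can never point at the dev or prod broker.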