Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Django ORM: Add integer days to a DateField to annotate next_service and filter it (PostgreSQL)
I am trying to annotate a queryset with next_service = last_service + verification_periodicity_in_days and then filter by that date. I am on Django 5.2.6 with PostgreSQL. last_service is a DateField. verification_periodicity lives on a related SubCategory and is the number of days (integer).

Models (minimal):

    # main/models.py
    class Category(models.Model):
        name = models.CharField(max_length=100)

    class SubCategory(models.Model):
        category = models.ForeignKey(Category, on_delete=models.CASCADE, related_name='subcategories')
        name = models.CharField(max_length=100)
        verification_periodicity = models.IntegerField()  # days

    class Asset(models.Model):
        sub_category = models.ForeignKey(SubCategory, on_delete=models.PROTECT)
        last_service = models.DateField(null=True, blank=True)

Goal: compute next_service = last_service + verification_periodicity days in the database, expose it in the API, and support filtering like ?next_date__gte=2025-12-06.

What I tried:

Simple cast and multiply:

    from django.db.models import ExpressionWrapper, F, DateField, IntegerField
    from django.db.models.functions import Cast

    qs = qs.annotate(
        next_service=ExpressionWrapper(
            F('last_service') + Cast(F('sub_category__verification_periodicity'), IntegerField()) * 1,
            output_field=DateField()
        )
    )

This does not shift by days and later caused type issues. Filtering by the annotated date also did not work as expected.

Using a Python timedelta:

    from datetime import timedelta

    qs = qs.annotate(
        next_service=F('last_service') + timedelta(days=1) * F('sub_category__verification_periodicity')
    )

This produced a duration in seconds in the serialized output. Example: "next_service": "86400.0" for one day, rather than a proper date. I need a date. Errors seen along …
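A minimal sketch of one way this annotation is usually done on PostgreSQL (the model and field names follow the question; the two-step annotation and the final Cast back to a date are my assumptions, not the asker's code): first turn the integer day count into an interval, then add it to the date and cast the result so it comes back, serializes, and filters as a plain date.

    from datetime import timedelta

    from django.db.models import DateField, DurationField, ExpressionWrapper, F, Value
    from django.db.models.functions import Cast

    qs = (
        Asset.objects
        .annotate(
            # integer days -> interval; the multiplication mixes types, so it needs
            # an explicit output_field
            service_interval=ExpressionWrapper(
                F("sub_category__verification_periodicity") * Value(timedelta(days=1)),
                output_field=DurationField(),
            )
        )
        .annotate(
            # date + interval yields a timestamp in PostgreSQL; cast it back so the
            # annotation behaves as a DateField end to end
            next_service=Cast(F("last_service") + F("service_interval"), output_field=DateField()),
        )
        .filter(next_service__gte="2025-12-06")
    )

Exposing it in the API is then just a read-only serializer field, and the date filter works because the comparison happens on a real date expression rather than on a duration.
-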
Stuck with Django ASGI server (Daphne) and AWS EB (with Docker)
I’m trying to deploy a Django application that uses Channels + ASGI + Daphne on AWS Elastic Beanstalk with the Docker platform. My container builds successfully, migrations run, and Daphne starts properly on 0.0.0.0:8000. Logs show the ASGI server is running without errors. The issue is that Elastic Beanstalk is not routing traffic to the Daphne server inside the Docker container.

Here’s what’s happening:

docker logs shows Daphne listening on 0.0.0.0:8000
The container starts cleanly (no errors)
curl <container-ip>:8000/ works
curl http://localhost/ on the host does not reach Daphne
/health/ returns nothing because Django had no route (fixed now)
The Elastic Beanstalk environment loads but the site doesn’t respond externally
It seems like NGINX inside EB is not proxying requests to the container

I think I need a correct NGINX proxy config or a proper EB .config file that routes traffic to the container’s internal IP/port. Can someone provide a working example of:

✅ Dockerfile
✅ entrypoint.sh
✅ EB .ebextensions config for ASGI/Daphne
✅ NGINX proxy config for forwarding WebSocket + HTTP traffic
✅ Any extra EB settings needed for Channels

Basically, I need the correct setup so EB can forward all traffic to Daphne inside a Docker container. Any working …
-
Unable to access my EC2 Ubuntu server via public IP:8000 [closed]
I associated a public Elastic IP address with my EC2 instance, installed the virtual environment properly, and python manage.py runserver 0.0.0.0:8000 runs without issue. PostgreSQL is connected on port 5432. Ports 22, 80 and 443 are allowed in the firewall (security group and outbound rules screenshots attached). When I run sudo ufw status verbose, the output indicates all the needed ports are open. My routing tables and network ACLs are also set properly, but when I try to access my server I get the following errors. -
Django Not Saving Form Data
I fill in the Django form in the contact.html file, but the form data is not saved in the database or anywhere else. There is no error or warning while saving the form data. (Form screenshot attached.)

views.py:

    from .forms import CnForm

    def contact(request):
        template = loader.get_template('contact.html')
        form = CnForm(request.POST or None)
        if form.is_valid():
            form.save()
        context = {'form': form}
        return HttpResponse(template.render(context, request))

models.py:

    from django.db import models

    class FModel(models.Model):
        first_name = models.CharField(max_length=100)
        last_name = models.CharField(max_length=100)

        def __str__(self):
            return self.first_name

forms.py:

    from django import forms
    from .models import FModel

    class CnForm(forms.ModelForm):
        class Meta:
            model = FModel
            fields = "__all__"

contact.html:

    <div class="contact-container">
        <form action="" method="post">
            {% csrf_token %}
            {{ form }}
            <input type="submit" value="Submit">
        </form>
    </div>
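A minimal sketch of the conventional pattern for a form view, assuming a URL name "contact" exists for this view; separating GET from POST and redirecting after a successful save also makes it easier to confirm the POST actually reaches this view:

    from django.shortcuts import redirect, render

    from .forms import CnForm

    def contact(request):
        if request.method == "POST":
            form = CnForm(request.POST)
            if form.is_valid():
                form.save()
                return redirect("contact")  # hypothetical URL name; adjust to your urls.py
        else:
            form = CnForm()
        return render(request, "contact.html", {"form": form})

If nothing is saved even with this structure, checking in the browser's network tab that the submit really issues a POST to this view's URL is usually the quickest next step.
-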
403 Forbidden: "CSRF Failed: CSRF token missing." on DRF api-token-auth/ after applying csrf_exempt
I'm encountering a persistent 403 Forbidden error with the detail: CSRF Failed: CSRF token missing. This happens when trying to obtain an authentication token using Django REST Framework's built-in api-token-auth/ endpoint.

Context: I am sending a POST request from Postman (using raw and application/json for the body). The CSRF protection is interfering because Postman, as an external client, doesn't handle session cookies or CSRF tokens. I attempted to fix this by explicitly applying the @csrf_exempt decorator to the view in my urls.py, but the error remains.

Configuration and code. Here are the relevant snippets from my project setup:

1. settings.py (middleware and DRF authentication). My middleware includes CSRF protection, and I have SessionAuthentication enabled, which seems to be causing the conflict.

    MIDDLEWARE = [
        "django.middleware.security.SecurityMiddleware",
        "django.contrib.sessions.middleware.SessionMiddleware",
        "django.middleware.common.CommonMiddleware",
        "django.middleware.csrf.CsrfViewMiddleware",
        "django.contrib.auth.middleware.AuthenticationMiddleware",
        "django.contrib.messages.middleware.MessageMiddleware",
        "django.middleware.clickjacking.XFrameOptionsMiddleware",
    ]

    REST_FRAMEWORK = {
        'DEFAULT_PAGINATION_CLASS': 'rest_framework.pagination.PageNumberPagination',
        'PAGE_SIZE': 10,
        'DEFAULT_AUTHENTICATION_CLASSES': (
            'rest_framework.authentication.TokenAuthentication',
            'rest_framework.authentication.SessionAuthentication',
        ),
        'DEFAULT_PERMISSION_CLASSES': (
            'rest_framework.permissions.IsAuthenticated',
        ),
    }

2. urls.py (where csrf_exempt is applied). This is how I'm currently trying to exempt the view. I have imported csrf_exempt from django.views.decorators.csrf.

    from django.contrib import admin
    from django.urls import path, include
    from rest_framework.authtoken import views
    from django.views.decorators.csrf import csrf_exempt
    from api import views as api_views

    urlpatterns = [
        path("home/", include("expenses.urls")),
        path("admin/", admin.site.urls),
        path("api/", include("api.urls")),
        …
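For what it's worth, the "CSRF Failed: CSRF token missing." detail comes from DRF's SessionAuthentication (its enforce_csrf step), not from CsrfViewMiddleware, which is why @csrf_exempt in urls.py changes nothing; DRF already wraps its views in csrf_exempt. A minimal sketch of one common way around it, assuming you really want this endpoint callable without a CSRF token (the class name is mine):

    # views.py for the token endpoint (illustrative)
    from rest_framework.authtoken.views import ObtainAuthToken

    class TokenAuthView(ObtainAuthToken):
        # Credentials arrive in the POST body, so no session authentication
        # (and therefore no CSRF enforcement) is needed on this view.
        authentication_classes = []

    # urls.py
    # path("api-token-auth/", TokenAuthView.as_view()),

The broader alternative is a SessionAuthentication subclass whose enforce_csrf() is a no-op, but scoping the change to the token endpoint keeps CSRF protection intact everywhere a session cookie is actually used.
-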
'django.db.utils.ProgrammingError: relation "users_user" does not exist' error while running ' python manage.py migrate_schemas --shared'
AccrediDoc - Multi-tenant Accreditation Management System: a comprehensive Django-based multi-tenant accreditation management system designed for healthcare organizations in India. Manage NABL, NABH, ISO 15189 and other healthcare accreditations with ease.

Features:

Multi-tenant architecture: secure, isolated environments for multiple organizations
Document management: upload, version, and track accreditation documents with expiry alerts
Compliance tracking: monitor compliance status with interactive checklists and evidence tracking
User management: role-based access control with different user roles
Accreditation types: support for multiple accreditation standards and clauses
Reporting: generate comprehensive compliance and performance reports
Audit logging: complete audit trail for all system activities

While running the migration for the shared schema I got the following error. It is a multi-tenant Django app with five different apps.

    System check identified some issues:

    WARNINGS:
    ?: (staticfiles.W004) The directory 'D:\workik_projects\AccrediDoc_V3\static' in the STATICFILES_DIRS setting does not exist.
    [standard:public] === Starting migration
    [standard:public] System check identified some issues:

    WARNINGS:
    ?: (staticfiles.W004) The directory 'D:\workik_projects\AccrediDoc_V3\static' in the STATICFILES_DIRS setting does not exist.
    [standard:public] Operations to perform:
    [standard:public]   Apply all migrations: admin, auth, contenttypes, django_celery_beat, sessions
    [standard:public] Running migrations:
    [standard:public]   Applying admin.0001_initial...
    Traceback (most recent call last):
      File "D:\workik_projects\AccrediDoc_V3\acc_venv\lib\site-packages\django\db\backends\utils.py", line 103, in _execute
        return self.cursor.execute(sql)
    psycopg2.errors.UndefinedTable: relation "users_user" does not exist

    The above exception was the direct cause of …
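A guess at the usual cause when migrate_schemas is involved (assuming django-tenants is the multi-tenancy package here): admin.0001_initial references AUTH_USER_MODEL (users.User), but the app that defines the custom user is not part of the shared migration run, so users_user never gets created in the public schema before admin's migrations execute. A sketch of the relevant settings; only "users" comes from the error message, the rest is assumed:

    # settings.py (illustrative)
    SHARED_APPS = [
        "django_tenants",            # django-tenants wants to be listed first
        "users",                     # the app that defines AUTH_USER_MODEL = "users.User"
        "django.contrib.contenttypes",
        "django.contrib.auth",
        "django.contrib.admin",
        "django.contrib.sessions",
        "django_celery_beat",
    ]

With the user app in SHARED_APPS (and its initial migration generated), rerunning python manage.py migrate_schemas --shared should create users_user before admin is applied.
-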
Django: Annotate queryset based on the existence of many-to-many relationships
I am using Django 5.2 with Postgres 17 and am trying to modify the queryset of a ListView for row-based access control. Items are to be included in the queryset either if the user is part of a project that is directly related to the item, or if the project related to the item is set to be "visible". The user is presented with different information and interaction possibilities in the front end depending on which of the two options ("accessible" vs "visible") was true. The following solution in theory yields the right result:

    def get_queryset(self):
        queryset = self.model.objects.all()
        if self.request.user.is_superuser:
            return queryset.annotate(accessible=Value(True))

        # Cache all projects that the user is part of. A project is a Django group
        # (one-to-one relationship) with some additional attributes.
        user_projects = Project.objects.filter(group__in=self.request.user.groups.all())

        # Get all items that the user can access and mark them accordingly.
        accessible_items = (
            queryset
            .filter(groups__project__in=user_projects)
            .annotate(accessible=Value(True))
        )

        # Get all items that the user can see (but not access), and mark them accordingly.
        visible_items = (
            queryset
            .filter(groups__project__in=Project.objects.filter(visible=True))
            .exclude(groups__project__in=user_projects)
            .annotate(accessible=Value(False))
        )

        return accessible_items.union(visible_items)

The approach is simple enough and I'm not too concerned about efficiency, but there is a significant drawback. I'm using a union of two querysets, and …
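A minimal sketch of an alternative that stays in a single queryset (so the normal queryset API keeps working), reusing the relation names from the snippet above; it filters to rows that are either accessible or visible and annotates accessible with an Exists() subquery instead of a union:

    from django.db.models import Exists, OuterRef, Q, Value

    def get_queryset(self):
        queryset = self.model.objects.all()
        if self.request.user.is_superuser:
            return queryset.annotate(accessible=Value(True))

        user_projects = Project.objects.filter(group__in=self.request.user.groups.all())

        # True when at least one of the item's groups belongs to one of the user's projects.
        accessible = Exists(
            self.model.objects.filter(pk=OuterRef("pk"), groups__project__in=user_projects)
        )

        return (
            queryset
            .filter(Q(groups__project__in=user_projects) | Q(groups__project__visible=True))
            .distinct()
            .annotate(accessible=accessible)
        )

The distinct() matters because the OR filter joins through the many-to-many relation and can otherwise duplicate rows when an item matches through several groups.
-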
Django views raise the error "cleansed = [self.cleanse_setting("", v) for v in value]" and go into infinite loops
I have two views that produce these messages in "sudo systemctl status gunicorn":

    Nov 07 10:46:36 mysite gunicorn[2107]: cleansed = [self.cleanse_setting("", v) for v in value]
    Nov 07 10:46:36 mysite gunicorn[2107]: File "/home/mysite/anaconda3/envs/mysite/lib/python3.10/site-packages/django/views/debug.py", line 135, in cleanse_setting

The whole site works, but any view.py that accesses a certain class produces these errors. The views were working with no problem. I added 4 fields to the class's models.py, then removed them and migrated the database with:

    python manage.py makemigrations
    python manage.py migrate

After that, the error started to show. Any help would be appreciated.
-
appAccountToken not sent to backend during Apple coupon (reward) redemption using StoreKit 2
I'm integrating Apple In-App Purchases with StoreKit 2 in an iOS app. The backend (Django) handles subscription verification and links each transaction to a user using appAccountToken. Everything works fine for normal subscription purchases — the app sets the appAccountToken correctly, and it reaches the backend through the transaction data. However, during coupon / reward redemption (using Apple’s Reward Redemption Sheet), the appAccountToken is not included in the transaction payload that Apple sends to the backend. As a result, my backend can’t associate the redeemed subscription with the correct user account. How can we ensure that the appAccountToken is included (or reattached) during reward / coupon redemption using StoreKit 2? Is there any recommended way to set or restore the appAccountToken during the reward redemption flow? -
Upgrading Django to 5.2.7 causes an error with rest_framework_simplejwt because django.utils.timezone.utc was removed
I am upgrading my Django project to v5.2.7. After installing requirements.txt with the upgraded versions of all libraries, I ran the command to validate the code:

    python manage.py check

But it is throwing this error:

    ImportError: Could not import 'rest_framework_simplejwt.authentication.JWTAuthentication' for API setting 'DEFAULT_AUTHENTICATION_CLASSES'.
    ImportError: cannot import name 'utc' from 'django.utils.timezone' (...\envs\localenv\Lib\site-packages\django\utils\timezone.py).

requirements.txt:

    asgiref==3.8.1
    certifi==2023.11.17
    Django==5.2.7
    django-cors-headers==4.3.1
    djangorestframework==3.14.0
    mysqlclient==2.2.0
    PyJWT==2.8.0
    pytz==2023.3
    newrelic==9.0.0
    djangorestframework_simplejwt==5.2.0
    sqlparse==0.4.4
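For context: django.utils.timezone.utc was deprecated in Django 4.1 and removed in Django 5.0, and djangorestframework-simplejwt 5.2.0 still imports it, which is what breaks the JWTAuthentication import. The usual fix is simply a newer simplejwt; the exact minimum version below is my recollection rather than something stated in the question, so verify it against the package changelog:

    djangorestframework-simplejwt>=5.3.0

It is also worth noting that djangorestframework 3.14.0 predates Django 5.2, so bumping DRF itself (3.16 is, as far as I know, the first release that lists Django 5.2 support) belongs in the same pass.
-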
Error running development server in Django project: some issue with migrations
I have developed a project for SaaS with django-tenants. While migrating I got the following error; it seems to be related to a migration file.

    (acc_venv) D:\workik_projects\AccrediDoc_v2>py manage.py makemigrations reports
    Traceback (most recent call last):
      File "D:\workik_projects\AccrediDoc_v2\manage.py", line 22, in <module>
        main()
      File "D:\workik_projects\AccrediDoc_v2\manage.py", line 19, in main
        execute_from_command_line(sys.argv)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
        utility.execute()
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\__init__.py", line 436, in execute
        self.fetch_command(subcommand).run_from_argv(self.argv)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 416, in run_from_argv
        self.execute(*args, **cmd_options)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 457, in execute
        self.check(**check_kwargs)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\management\base.py", line 492, in check
        all_issues = checks.run_checks(
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\core\checks\registry.py", line 89, in run_checks
        new_errors = check(app_configs=app_configs, databases=databases)
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\contrib\auth\checks.py", line 101, in check_user_model
        if isinstance(cls().is_anonymous, MethodType):
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\db\models\base.py", line 537, in __init__
        val = field.get_default()
      File "D:\workik_projects\AccrediDoc_v2\acc_venv\lib\site-packages\django\db\models\fields\related.py", line 1176, in get_default
        if isinstance(field_default, self.remote_field.model):
    TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union

Running py manage.py makemigrations report (singular) produces the same traceback, again ending at all_issues = checks.run_checks( …
-
Django static images not showing on Vercel
I'm deploying my Django project to Vercel, and everything works fine locally, but after deployment the images from the static folder are not showing.

Project structure:

    datafam/
    ├── settings.py
    ├── wsgi.py
    static/
    └── teams/
        └── image/
            ├── Abu Sofian.webp
            ├── Crystal Andrea Dsouza.webp
    templates/
    └── teams/
        └── index.html
    staticfiles/
    vercel.json
    requirements.txt

file vercel.json:

    {
      "builds": [
        {
          "src": "datafam/wsgi.py",
          "use": "@vercel/python",
          "config": { "maxLambdaSize": "100mb", "runtime": "python3.12" }
        }
      ],
      "routes": [
        { "src": "/(.*)", "dest": "datafam/wsgi.py" }
      ]
    }

What I’m trying to achieve: I just want my static images (under /static/teams/image/) to be correctly served after deploying to Vercel — exactly the same way Django serves them locally using {% static %} in templates.

file index.html:

    {% extends "base.html" %}
    {% load static %}
    {% block head_title %} {{title}} {% endblock head_title %}
    {% block content %}
    <section class="dark:bg-neutral-900 bg-white py-20">
        <div class="container mx-auto px-4 text-center">
            <p class="text-4xl md:text-5xl font-extrabold dark:text-gray-100 text-gray-800">Team Us</p>
            <p class="mt-16 text-lg text-gray-600 dark:text-gray-400 max-w-4xl mx-auto">
                Meet the passionate and dedicated individuals who form the core of our community. Our team is committed to fostering a collaborative and supportive environment for all data enthusiasts.
            </p>
        </div>
        {# Changing the container to use flex-wrap and gap …
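One commonly used approach for this setup (not from the question): since the catch-all route sends every request, including /static/..., to wsgi.py, let Django serve the collected files itself with WhiteNoise instead of relying on Vercel's static handling. A sketch, assuming whitenoise is added to requirements.txt and collectstatic runs during the build:

    # settings.py (illustrative WhiteNoise wiring)
    MIDDLEWARE = [
        "django.middleware.security.SecurityMiddleware",
        "whitenoise.middleware.WhiteNoiseMiddleware",  # directly after SecurityMiddleware
        # ... the rest of the existing middleware ...
    ]

    STATIC_URL = "/static/"
    STATICFILES_DIRS = [BASE_DIR / "static"]
    STATIC_ROOT = BASE_DIR / "staticfiles"

python manage.py collectstatic --noinput then has to run as part of the Vercel build step so that staticfiles/ exists in the deployed bundle.
-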
"SMTPAuthenticationError: Authentication disabled due to threshold limitation" on production server on AWS
I've set up email sending in my Django project that is deployed on AWS. When I run it locally the emails go out without a problem, but when I try it on the production server on an EC2 Ubuntu VM, I get this error:

    smtplib.SMTPAuthenticationError: (535, b'5.7.0 Authentication disabled due to threshold limitation')

My settings are the same on both machines:

    EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
    EMAIL_HOST = 'mail.my-provider.com'
    EMAIL_PORT = 1025
    EMAIL_HOST_USER = 'me@my-provider.com'
    EMAIL_HOST_PASSWORD = 'mypassword'

Is there anything specific I need to do to be able to send emails from AWS? My outbound rules are set wide open.
-
Cloud Storage + Cloud Tasks for async webhook processing on Cloud Run - best practice
I've been looking around for an answer to this, but struggling to find something definitive. My apologies if I've overlooked something obvious.

I'm processing webhooks on Cloud Run (Django) that need async handling because processing takes 30+ seconds but the webhook provider times out at 30s. Since Cloud Run is stateless and spins up per-request (no persistent background workers like Celery), I'm using this pattern:

    # 1. Webhook endpoint
    def receive_webhook(request):
        blob_name = f"webhooks/{uuid.uuid4()}.json"
        bucket.blob(blob_name).upload_from_string(json.dumps(request.data))
        webhook = WebhookPayload.objects.create(gcs_path=blob_name)
        create_cloud_task(payload_id=webhook.id)
        return Response(status=200)  # Fast response

And then our Cloud Task calls the following endpoint with the unique path to the Cloud Storage object passed from the original webhook endpoint:

    def process_webhook(request):
        webhook = WebhookPayload.objects.get(id=request.data['payload_id'])
        payload = json.loads(bucket.blob(webhook.gcs_path).download_as_text())
        process_data(payload)  # 30+ seconds
        bucket.blob(webhook.gcs_path).delete()

Is GCS + Cloud Tasks the right pattern for Cloud Run's stateless model, or is storing the JSON temporarily in a Django model fine since Cloud Tasks handles the queueing? Does temporary storage in GCS rather than in Postgres provide meaningful benefits? Should I be using Pub/Sub instead? It seems more for event broadcasting; I just need to invoke one endpoint. Thanks for any advice that comes my way.
-
How do you customise these three dots (the actions menu) in Wagtail?
I want to add another option to it, "Send email", which will send an email to all the subscribers.

    class FeaturedPageViewSet(SnippetViewSet):
        model = FeaturedPages
        menu_label = "Featured Pages"
        menu_icon = "grip"
        menu_order = 290
        add_to_settings_menu = False
        exclude_from_explorer = False
        list_display = ("blog", "workshop", "ignore")
        search_fields = ("blog", "workshop", "ignore")
        list_filter = ("ignore",)

(Screenshot: https://i.sstatic.net/fzKv5gM6.png)
-
Django app static files recently started returning 404s, deployed by Heroku
The static files in my Django production app recently started returning 404s. (Screenshot of production site with dev tools open.)

Context: This project has been deployed without issue for several years. I have not pushed changes since September. I am unsure when the 404s began. The staging version of my Heroku app loads the static assets. (Screenshot of staging site with dev tools open.)

Investigation: I read the most recent WhiteNoise documentation; my app still follows their setup guidance. You can see my settings here (n.b., the project is open source). I also ran heroku run python manage.py collectstatic --app APP_NAME directly. I am aware of this related post, too: Heroku static files not loading, Django
-
Django REST Framework ListAPIView user permissions - can't seem to get them working
I have a Django project with Django REST Framework. I have a simple view, Facility, which is a ListAPIView. Permissions were generated for add, change, delete and view. I have created a new user and assigned him no permissions, yet he is able to call GET on Facility.

    class FacilityListView(ListAPIView):
        queryset = Facility.objects.all()
        serializer_class = FacilitySerializer
        permission_classes = [IsAuthenticated, DjangoModelPermissions]

        def get(self, request):
            self.check_permissions(request)
            facilities = Facility.objects.all()
            serializer = FacilitySerializer(facilities, many=True)
            return Response(serializer.data)

If I test the user's permissions, I get an empty list:

    perms = list(user.get_all_permissions())

If I check whether the permission exists, I get the Facility model as the result:

    a = Permission.objects.get(codename='view_facility')

However, if I check which permissions are required for Facility, I also get an empty list:

    p = perm.get_required_permissions('GET', Facility)

The model is as basic as it can be:

    from django.db import models

    class Facility(models.Model):
        name = models.CharField(max_length=200)
        created_at = models.DateTimeField(auto_now_add=True)

        def __str__(self):
            return self.name

This is what it says in my settings, and I have no custom permission classes or anything:

    REST_FRAMEWORK = {
        'DEFAULT_AUTHENTICATION_CLASSES': (
            'API.authentication.JWTAuthenticationFromCookie',
        ),
        'DEFAULT_PERMISSION_CLASSES': [
            'rest_framework.permissions.IsAuthenticated',
            'rest_framework.permissions.DjangoModelPermissions',
        ],
    }

Unfortunately, I have not been able to find an answer to my problem. If anyone has any idea, that would be greatly appreciated!
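The empty list from get_required_permissions('GET', Facility) is the documented behaviour rather than a bug: DjangoModelPermissions maps GET, HEAD and OPTIONS to no required permissions, so any authenticated user may read. A minimal sketch of a subclass that also demands the view permission for reads (the class name is mine):

    from rest_framework.permissions import DjangoModelPermissions

    class StrictDjangoModelPermissions(DjangoModelPermissions):
        # Keep the default map but additionally require view_<model> for read requests.
        perms_map = {
            **DjangoModelPermissions.perms_map,
            "GET": ["%(app_label)s.view_%(model_name)s"],
            "HEAD": ["%(app_label)s.view_%(model_name)s"],
        }

Swapping this into permission_classes (or DEFAULT_PERMISSION_CLASSES) should turn the unprivileged user's GET into a 403; the explicit self.check_permissions(request) call in the view is redundant, since ListAPIView already runs the permission checks for every request.
-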
Django gunicorn gevent with Statsig - run code in forked process
I am running a Django app with gunicorn gevent workers. I'm using Statsig for feature flagging. It appears to be struggling, I assume due to gevent's monkey patching. I was hoping I could get around this by running Statsig after app start-up, specifically only in the forked processes, not the main process. It does init() but then never updates its internal cache - so my feature gates always return the first value they got, and never fetch new ones. Has anyone had any similar issues at all?
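For the "only in the forked processes" part, gunicorn's post_fork server hook runs inside each worker immediately after the fork, which is the usual place for this kind of per-worker initialisation. A sketch, assuming the statsig SDK's statsig.initialize() entry point and a gunicorn.conf.py (the environment variable name is made up):

    # gunicorn.conf.py
    import os

    def post_fork(server, worker):
        # Import inside the hook so nothing Statsig-related runs in the master process.
        from statsig import statsig
        statsig.initialize(os.environ["STATSIG_SERVER_SECRET"])  # hypothetical env var

Whether the SDK's background polling survives gevent's monkey patching is a separate question; if the cache still never refreshes after moving initialisation here, the polling thread itself is the more likely suspect.
-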
Django materialized view refresh Celery task freezes in the DB layer for a larger dataset
In my application, written in Django 5.2, I want to aggregate static data and store it as a materialized view in PostgreSQL 16.10. I can create it as my own migration or using the django-materialized-view library; I have no problems creating it. However, when I call the Celery task that should refresh the view after updating the data, it "freezes" when I enable updates for all three carriers. On the other hand, if I remove the third carrier (whose data accounts for approximately 95% of the total of the three), the refresh task runs without any problems. I could blame this on the giant size of the data, but if I run the update only for this giant carrier, or write the refresh command myself in a DBMS client, it executes successfully in 20-30 seconds. The Celery worker that performs the update and refresh tasks has concurrency=1 (the refresh task is, of course, the last in the queue), and the configuration of work_mem, maintenance_work_mem, and shared_buffers in the database should definitely be able to handle this task. During the update, no other queries are executed in the database, and the refresh is CONCURRENTLY. You can find my project in the GitHub repository: … -
Django uvicorn ASGI concurrency performance issue
We are running Django with the uvicorn ASGI server in Kubernetes. Following best-practice guides we are doing this with only 1 worker, and allowing the cluster to scale our pods up/down. We chose ASGI as we wanted to be async-ready; however, currently our endpoints are all sync. Internally we are using our own auth (microservice), which is a request to an internal pod using Python's requests library. This works via a JWT being passed up, which we validate against our public keys, then fetch user details/permissions. After this, it's all just ORM operations: a couple of .get() and some .create(). When I hit our endpoint with 1 user this flies through at like 20-50ms. However, as soon as we bump this up to 2-5 users, the whole thing comes to a grinding halt and the requests start taking up to 3-5s. Using profiling tools we can see there are odd gaps of nothing between the internal auth request finishing and going on to the next function, and similar in other areas. To me this seems to be simply a concurrency issue. Our 1 pod has 1 uvicorn worker and can only deal with 1 request. But why would they …
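One plausible explanation for those gaps (an assumption, since the profile isn't shown): under ASGI, Django runs sync views through sync_to_async with thread_sensitive=True, so the sync portions of concurrent requests inside a single worker execute one after another on the same thread, and a blocking auth call in one request stalls the others. A sketch of one way out: make the hot endpoint async, push the blocking requests call onto a worker thread, and use the async ORM (all names below are hypothetical):

    import asyncio

    import requests
    from django.http import JsonResponse

    from .models import Order  # hypothetical model

    AUTH_URL = "http://auth-service.internal/validate"  # hypothetical internal endpoint

    async def order_detail(request, pk):
        token = request.headers.get("Authorization", "")
        # Run the blocking HTTP call in a thread so the event loop can serve other requests.
        auth_resp = await asyncio.to_thread(
            requests.post, AUTH_URL, headers={"Authorization": token}, timeout=5
        )
        if auth_resp.status_code != 200:
            return JsonResponse({"detail": "unauthorized"}, status=401)
        order = await Order.objects.aget(pk=pk)  # async ORM, Django 4.1+
        return JsonResponse({"id": order.pk, "status": order.status})

The other common mitigation is simply giving each pod more than one worker (or staying on WSGI while everything is sync), since a single sync-only ASGI worker effectively handles one request's sync work at a time.
-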
Create a question with its options from the same endpoint
So I am making a backend system using DRF in Django. This is my first project in Django and DRF; I am using Django purely as a REST backend. I am making a quiz/MCQ application. This is from my questions app, models.py:

    from django.db import models
    from classifications.models import SubSubCategory

    class Question(models.Model):
        ANSWER_TYPES = [
            ('single', 'Single Correct'),
            ('multiple', 'Multiple Correct'),
        ]
        text = models.TextField()
        answer_type = models.CharField(max_length=10, choices=ANSWER_TYPES, default='single')
        difficulty = models.CharField(
            max_length=10,
            choices=[('easy', 'Easy'), ('medium', 'Medium'), ('hard', 'Hard')],
            default='medium'
        )
        explanation = models.TextField(blank=True, null=True)
        subsubcategories = models.ManyToManyField(SubSubCategory, related_name='questions', blank=True)
        created_at = models.DateTimeField(auto_now_add=True)

        def __str__(self):
            return 'question'

        class Meta:
            ordering = ['-created_at']

        def correct_options(self):
            return self.options.filter(is_correct=True)

        def incorrect_options(self):
            return self.options.filter(is_correct=False)

    class Option(models.Model):
        question = models.ForeignKey(Question, related_name='options', on_delete=models.CASCADE)
        label = models.CharField(max_length=5)
        text = models.TextField()
        is_correct = models.BooleanField(default=False)

        def __str__(self):
            return "options"

I am using a ModelViewSet with a router, but when I try to create a question I have to make requests to two different endpoints: one for creating the question and another for creating the options for that question.

views.py:

    from rest_framework import viewsets
    from .models import Question, Option
    from .serializers import QuestionSerializer, OptionSerializer
    from core.permissions import IsAdminOrReadOnlyForAuthenticated
    from django.db.models import Q

    class OptionViewSet(viewsets.ModelViewSet):
        queryset = Option.objects.all()
        serializer_class = …
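A minimal sketch of the usual way to accept options inline on the question endpoint: a writable nested serializer whose create() builds the Option rows in the same request (the field lists mirror the models above; the serializer bodies are mine, since the question's serializers.py isn't shown):

    from rest_framework import serializers

    from .models import Option, Question

    class OptionSerializer(serializers.ModelSerializer):
        class Meta:
            model = Option
            fields = ["id", "label", "text", "is_correct"]

    class QuestionSerializer(serializers.ModelSerializer):
        options = OptionSerializer(many=True)

        class Meta:
            model = Question
            fields = ["id", "text", "answer_type", "difficulty", "explanation",
                      "subsubcategories", "options", "created_at"]

        def create(self, validated_data):
            options_data = validated_data.pop("options", [])
            subsubcategories = validated_data.pop("subsubcategories", [])
            question = Question.objects.create(**validated_data)
            question.subsubcategories.set(subsubcategories)
            Option.objects.bulk_create(
                Option(question=question, **option) for option in options_data
            )
            return question

With this in place, the existing QuestionViewSet can take a single POST whose body contains an options list, and the separate OptionViewSet becomes optional.
-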
Django difference between aware datetimes across DST
I'm working on a Django application in which I need to calculate the difference between timestamps stored in the DB. This week I ran into some problems related to DST. In particular, in the following code snippet:

    tEndUtc = tEnd.astimezone(timezone.utc)
    tStartUtc = tStart.astimezone(timezone.utc)
    total_timeUTC = tEndUtc - tStartUtc
    total_time = tEnd - tStart

total_time (which uses the timezone-aware timestamps stored in the DB) is one hour shorter than total_timeUTC. I have USE_TZ = True in the settings file. Here's what I get:

    tStart = datetime.datetime(2025, 10, 24, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Rome'))
    tEnd = datetime.datetime(2025, 10, 31, 23, 59, 59, 999999, tzinfo=zoneinfo.ZoneInfo(key='Europe/Rome'))
    tStartUtc = datetime.datetime(2025, 10, 23, 22, 0, tzinfo=datetime.timezone.utc)
    tEndUtc = datetime.datetime(2025, 10, 31, 22, 59, 59, 999999, tzinfo=datetime.timezone.utc)
    total_timeUTC = datetime.timedelta(days=8, seconds=3599, microseconds=999999)
    total_time = datetime.timedelta(days=7, seconds=86399, microseconds=999999)

What is the correct way to handle DST? And in particular, how does one correctly calculate a time difference across DST? The correct time delta is the one I get when using UTC. Having built the whole application using timezone-aware datetimes, I would prefer not to change everything and convert to UTC timestamps. Thanks in advance.
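The discrepancy is documented Python behaviour rather than a Django problem: when two aware datetimes share the same tzinfo, subtraction ignores UTC offsets and works on the naive wall-clock values, so the hour gained when DST ended on 26 October 2025 disappears from the result. Converting both operands to a single fixed offset first (UTC via astimezone(), or comparing .timestamp() values) gives the true elapsed time, which means the application can keep its aware local datetimes and only normalise at the point of subtraction. A small illustration with the values from the question:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    rome = ZoneInfo("Europe/Rome")
    t_start = datetime(2025, 10, 24, 0, 0, tzinfo=rome)
    t_end = datetime(2025, 10, 31, 23, 59, 59, 999999, tzinfo=rome)

    wall_clock = t_end - t_start
    # 7 days, 23:59:59.999999 (same tzinfo on both sides: offsets are ignored)

    elapsed = t_end.astimezone(timezone.utc) - t_start.astimezone(timezone.utc)
    # 8 days, 0:59:59.999999 (true elapsed time, including the extra DST hour)
-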
Can I use the get_or_create() function in Django to assign a global variable?
I am an intern at a company, we are using Django as the framework, and I was working on a two-part registration system in which an admin makes the initial registration and a link is sent via SMS to the user so the user can complete the registration. I know my code is bad. I have a feeling I should use the get_or_create function to assign a global variable, but I'm afraid of breaking this (I use git, but I'm still scared).

    class RegisterSerializer(serializers.ModelSerializer):
        """Class for registering users with multiple groups."""

        # is_superuser = serializers.BooleanField(default=False, required=False, write_only=True)

        class Meta:
            fields = [
                "national_code",
                "phone_number",
            ]
            model = User
            extra_kwargs = {
                "national_code": {"write_only": True, "validators": []},
                "phone_number": {"write_only": True, "validators": []},
            }

        def validate(self, attrs):
            if not attrs.get("national_code"):
                raise serializers.ValidationError(_("National code is required."))
            if not attrs.get("phone_number"):
                raise serializers.ValidationError(_("Phone number is required."))
            if User.objects.filter(
                phone_number=attrs.get("phone_number"),
                national_code=attrs.get("national_code"),
                is_complete=True,
            ).exists():
                raise serializers.ValidationError(_("user already exists"))
            # if User.objects.filter(phone_number=attrs.get("phone_number")).exists():
            #     raise serializers.ValidationError(_("Phone number already exist."))
            return attrs

        def create(self, validated_data):
            phone_number = validated_data["phone_number"]
            national_code = validated_data["national_code"]
            user, created = User.objects.get_or_create(
                phone_number=phone_number,
                national_code=national_code,
                defaults={"is_complete": False}
            )
            token = RegisterToken.for_user(user)
            try:
                Sms.sendSMS(
                    phone_number,
                    f"{str(settings.DOMAIN_NAME)}/api/accounts/complete-register/?token={str(token)}",
                )
                # do not delete this part soon or later we will use this
                # Sms.SendRegisterLink(
                #     phone_number,
                #     [
                #         {
                #             …
-
Django Celery Beat SQS slow scheduling
Beat seems to be sending the messages into SQS very slowly, at about 100/minute. Every Sunday I have a send-out to about 16k users, and they're all booked for 6.30pm. Beat starts picking it up at the expected time, and I would expect a huge spike in messages coming into SQS at that point, but it takes its time, and I can see in the logs that the "Sending tasks x..." lines go on for a few hours. I expect ~16k messages to go out around 6.30pm, and the number of messages processed and deleted to pick up as the autoscaling sets in. I have autoscaling on for my Celery workers, but because the number of messages doesn't really ever spike, the workers don't really scale until later, when the messages start backing up a bit. I'm really puzzled by this behaviour; does anyone know what I could be missing? I'm running Celery with some crontab tasks, but this one task in particular is a PeriodicTask:

    celery_beat: celery -A appname beat --loglevel=INFO
-
Django Mongodb Backend not creating collections and indexes
Summary: Running Django migrations against our MongoDB database does not create MongoDB collections or indexes as defined in our app. The command completes without errors, but no collections or indexes are provisioned in MongoDB.

Environment:

Django: 5.2.5
django-mongodb-backend: 5.2.2
Python: 3.11.14
Database setup: PostgreSQL as default, MongoDB as secondary via django-mongodb-backend

Steps to Reproduce:

Configure DATABASES with a mongodb alias (see snippet below).
Implement models that should live in MongoDB and include indexes/constraints.
Implement a database router that routes models with use_db = "mongodb" to the mongodb DB.
Run:

    python manage.py makemigrations mailbot_search_agent
    python manage.py migrate mailbot_search_agent --database=mongodb

Expected:

MongoDB collections are created for the models that declare use_db = "mongodb".
Declared indexes and unique constraints are created.
If supported by the backend, custom Atlas Search/Vector index definitions are applied.

Actual:

migrate --database=mongodb completes, but:
Collections are not created (or get created only after the first write).
Indexes defined in migrations (0002) and in model Meta/indexes are not present in MongoDB.
Atlas Search/Vector indexes (declared via backend-provided Index classes) are not created.

DATABASES Configuration (snippets):

    MONGO_CONNECTION_STRING = os.environ.get("MONGO_CONNECTION_STRING")
    MONGO_DB_NAME = os.environ.get("MONGO_DB_NAME", "execfn")

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "execfn",
            "USER": "execfn_user",
            "PASSWORD": os.environ.get("DJANGO_DB_PASSWORD"),
            "HOST": "localhost",
            "PORT": "5432",
        },
        "mongodb": { …