Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Django uvicorn ASGI concurrency performance issue
We are running Django with uvicorn (ASGI) in Kubernetes. Following best-practice guides we are doing this with only 1 worker, allowing the cluster to scale our pods up/down. We chose ASGI because we wanted to be async-ready; however, currently our endpoints are all sync. Internally we use our own auth (micro service), which is a request to an internal pod using Python's requests library. This works via a JWT being passed up, which we validate against our public keys before fetching user details/permissions. After this, it's all just ORM operations: a couple of .get() calls and some .create() calls. When I hit our endpoint with 1 user this flies through at around 20-50ms. However, as soon as we bump this up to 2-5 users, the whole thing comes to a grinding halt and requests start taking up to 3-5s. Using profiling tools we can see there are odd gaps of nothing between the internal auth request finishing and going on to the next function, and similar gaps in other areas. To me this seems to be simply a concurrency issue: our 1 pod has 1 uvicorn worker and can only deal with 1 request. But why would they … -
Create question with options from same endpoint
So I am making a backend system using DRF in Django. This is my first project in Django and DRF, and I am using Django purely as a REST backend for a quiz/MCQ application. This is from my questions app, models.py: from django.db import models from classifications.models import SubSubCategory class Question(models.Model): ANSWER_TYPES = [ ('single', 'Single Correct'), ('multiple', 'Multiple Correct'), ] text = models.TextField() answer_type = models.CharField(max_length=10, choices=ANSWER_TYPES, default='single') difficulty = models.CharField( max_length=10, choices=[('easy', 'Easy'), ('medium', 'Medium'), ('hard', 'Hard')], default='medium' ) explanation = models.TextField(blank=True, null=True) subsubcategories = models.ManyToManyField(SubSubCategory, related_name='questions', blank=True) created_at = models.DateTimeField(auto_now_add=True) def __str__(self): return 'question' class Meta: ordering = ['-created_at'] def correct_options(self): return self.options.filter(is_correct=True) def incorrect_options(self): return self.options.filter(is_correct=False) class Option(models.Model): question = models.ForeignKey(Question, related_name='options', on_delete=models.CASCADE) label = models.CharField(max_length=5) text = models.TextField() is_correct = models.BooleanField(default=False) def __str__(self): return "options" I am using a ModelViewSet with a router, but when I try to create a question I have to make requests to two different endpoints: one for creating the question and another for creating its options. views.py: from rest_framework import viewsets from .models import Question, Option from .serializers import QuestionSerializer, OptionSerializer from core.permissions import IsAdminOrReadOnlyForAuthenticated from django.db.models import Q class OptionViewSet(viewsets.ModelViewSet): queryset = Option.objects.all() serializer_class = … -
Django difference between aware datetimes across DST
I'm working on a Django application in which I need to calculate the difference between timestamps stored in the DB. This week I ran into some problems related to DST, in particular in the following code snippet: tEndUtc = tEnd.astimezone(timezone.utc) tStartUtc = tStart.astimezone(timezone.utc) total_timeUTC = tEndUtc - tStartUtc total_time = tEnd - tStart total_time (which uses the timezone-aware timestamps stored in the DB) is one hour shorter than total_timeUTC. I have USE_TZ = True in the settings file. Here's what I get: tStart = datetime.datetime(2025, 10, 24, 0, 0, tzinfo=zoneinfo.ZoneInfo(key='Europe/Rome')) tEnd = datetime.datetime(2025, 10, 31, 23, 59, 59, 999999, tzinfo=zoneinfo.ZoneInfo(key='Europe/Rome')) tStartUtc = datetime.datetime(2025, 10, 23, 22, 0, tzinfo=datetime.timezone.utc) tEndUtc = datetime.datetime(2025, 10, 31, 22, 59, 59, 999999, tzinfo=datetime.timezone.utc) total_timeUTC = datetime.timedelta(days=8, seconds=3599, microseconds=999999) total_time = datetime.timedelta(days=7, seconds=86399, microseconds=999999) What is the correct way to handle DST? And in particular, how does one correctly calculate a time difference across DST? The correct time delta is the one I get when using UTC. Having built the whole application using timezone-aware datetimes, I would prefer not to change everything and convert to UTC timestamps. Thanks in advance. -
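A minimal standalone sketch of what is likely happening with the question above: in CPython, subtracting two aware datetimes that share the *same* tzinfo object is done on the wall-clock values, ignoring the offset change at the DST transition; converting both ends to UTC first gives the true elapsed time.

```python
# Demonstrates the one-hour discrepancy from the question: the same-tzinfo
# subtraction is wall-clock arithmetic, while the UTC route is absolute time.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

rome = ZoneInfo("Europe/Rome")
t_start = datetime(2025, 10, 24, 0, 0, tzinfo=rome)
t_end = datetime(2025, 10, 31, 23, 59, 59, 999999, tzinfo=rome)

# Same tzinfo object on both sides -> naive wall-clock subtraction,
# silently dropping the hour gained when DST ends on 2025-10-26.
wall = t_end - t_start

# Normalizing both ends to UTC (or any fixed offset) gives true elapsed time.
elapsed = t_end.astimezone(timezone.utc) - t_start.astimezone(timezone.utc)

print(wall)     # 7 days, 23:59:59.999999
print(elapsed)  # 8 days, 0:59:59.999999
```

So the application can keep its aware datetimes; only the subtraction needs to go through a fixed-offset conversion (or use datetimes whose tzinfo objects differ, which triggers the UTC path automatically).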
can i use get_or_create() function in django to assign a global variable?
I am an intern at a company where we use Django as our framework. I was working on a two-part registration system in which an admin makes the initial registration and a link is sent via SMS so the user can complete it. I know my code is bad. I have a feeling I should use the get_or_create function to assign a global variable, but I'm afraid of breaking things (I use git, but I'm still scared). class RegisterSerializer(serializers.ModelSerializer): """Class for registering users with multiple groups.""" # is_superuser = serializers.BooleanField(default=False, required=False, write_only=True) class Meta: fields = [ "national_code", "phone_number", ] model = User extra_kwargs = { "national_code": {"write_only": True, "validators": []}, "phone_number": {"write_only": True, "validators": []}, } def validate(self, attrs): if not attrs.get("national_code"): raise serializers.ValidationError(_("National code is required.")) if not attrs.get("phone_number"): raise serializers.ValidationError(_("Phone number is required.")) if User.objects.filter( phone_number=attrs.get("phone_number"), national_code=attrs.get("national_code"), is_complete=True, ).exists(): raise serializers.ValidationError(_("user already exists")) # if User.objects.filter(phone_number=attrs.get("phone_number")).exists(): # raise serializers.ValidationError(_("Phone number already exist.")) return attrs def create(self, validated_data): phone_number = validated_data["phone_number"] national_code = validated_data["national_code"] user, created = User.objects.get_or_create( phone_number=phone_number, national_code=national_code, defaults={"is_complete": False} ) token = RegisterToken.for_user(user) try: Sms.sendSMS( phone_number, f"{str(settings.DOMAIN_NAME)}/api/accounts/complete-register/?token={str(token)}", ) # do not delete this part soon or later we will use this # Sms.SendRegisterLink( # phone_number, # [ # { # … -
Django Celery Beat SQS slow scheduling
Beat seems to be sending the messages into SQS very slowly, about 100/minute. Every Sunday I have a sendout to about 16k users, and they're all booked for 6.30pm. Beat starts picking it up at the expected time, and I would expect a huge spike in messages coming into SQS at that time, but it takes its time, and I can see in the logs that the "Sending tasks x..." goes on for a few hours. I expect ~16k messages to go out around 6.30pm, and for the number of messages processed and deleted to pick up as the autoscaling sets in. I have autoscaling on for my Celery workers, but because the number of messages never really spikes, the workers don't really scale until later, when the messages start backing up a bit. I'm really puzzled by this behaviour; does anyone know what I could be missing? I'm running Celery with some crontab tasks, but this one task in particular is a PeriodicTask. celery_beat: celery -A appname beat --loglevel=INFO -
Django Mongodb Backend not creating collections and indexes
Summary Running Django migrations against our MongoDB database does not create MongoDB collections or indexes as defined in our app. The command completes without errors, but no collections or indexes are provisioned in MongoDB. Environment Django: 5.2.5 django-mongodb-backend: 5.2.2 Python: 3.11.14 Database setup: PostgreSQL as default, MongoDB as secondary via django-mongodb-backend Steps to Reproduce Configure DATABASES with a mongodb alias (see snippet below). Implement models that should live in MongoDB and include indexes/constraints. Implement a database router that routes models with use_db = "mongodb" to the mongodb DB. Run: python manage.py makemigrations mailbot_search_agent python manage.py migrate mailbot_search_agent --database=mongodb Expected MongoDB collections are created for the models that declare use_db = "mongodb". Declared indexes and unique constraints are created. If supported by backend, custom Atlas Search/Vector index definitions are applied. Actual migrate --database=mongodb completes, but: Collections are not created (or get created only after first write). Indexes defined in migrations (0002) and in model Meta/indexes are not present in MongoDB. Atlas Search/Vector indexes (declared via backend-provided Index classes) are not created. DATABASES Configuration (snippets) MONGO_CONNECTION_STRING = os.environ.get("MONGO_CONNECTION_STRING") MONGO_DB_NAME = os.environ.get("MONGO_DB_NAME", "execfn") DATABASES = { "default": { "ENGINE": "django.db.backends.postgresql", "NAME": "execfn", "USER": "execfn_user", "PASSWORD": os.environ.get("DJANGO_DB_PASSWORD"), "HOST": "localhost", "PORT": "5432", }, "mongodb": { … -
How to aggregate hierarchical data efficiently in Django without causing N+1 queries?
I’m working with a hierarchical model structure in Django, where each level can represent a region, district, or village. The structure looks like this: class Location(models.Model): name = models.CharField(max_length=255) parent = models.ForeignKey( 'self', on_delete=models.CASCADE, related_name='children', null=True, blank=True ) def __str__(self): return self.name Each Location can have child locations (for example: Region → District → Village). I also have a model that connects each location to a measurement point: class LocationPoint(models.Model): location = models.ForeignKey(Location, on_delete=models.CASCADE) point = models.ForeignKey('Point', on_delete=models.DO_NOTHING, db_constraint=False) And a model that stores daily or hourly measurement values: import uuid class Value(models.Model): id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False) point = models.ForeignKey('Point', on_delete=models.DO_NOTHING, db_constraint=False) volume = models.FloatField(default=0) timestamp = models.DateTimeField() Goal: I want to aggregate values (e.g., total volume) for each top-level region, including all nested child levels (districts, villages, etc.). Example: Region A → Total Volume: 10,000 Region B → Total Volume: 20,000 Problem: When I try to calculate these sums recursively (looping over children and summing their related Value records), the number of database queries increases dramatically — a classic N+1 query problem. Question: How can I efficiently compute aggregated values across a hierarchical model in Django — for example, summing all Value.volume fields for every descendant location — … -
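One common pattern for the rollup above, sketched under the assumption that the location tree is small enough to hold in memory: fetch the parent links and the per-location sums in two queries, then aggregate up to each top-level region in Python. Plain dicts stand in for the `.values()` results the ORM would return.

```python
# Two queries total instead of N+1: one for the tree, one for the sums.
from collections import defaultdict

# Query 1: Location.objects.values("id", "parent_id")
locations = [
    {"id": 1, "parent_id": None},  # Region A
    {"id": 2, "parent_id": 1},     # District in Region A
    {"id": 3, "parent_id": 2},     # Village in that district
    {"id": 4, "parent_id": None},  # Region B
]
# Query 2: sums grouped by location, e.g. Value joined through LocationPoint
# and annotated with Sum("volume") -- illustrative numbers here.
volume_by_location = {3: 10_000.0, 4: 20_000.0}

parent = {loc["id"]: loc["parent_id"] for loc in locations}

def root_of(loc_id):
    """Walk up the parent chain to the top-level region."""
    while parent[loc_id] is not None:
        loc_id = parent[loc_id]
    return loc_id

totals = defaultdict(float)
for loc_id, volume in volume_by_location.items():
    totals[root_of(loc_id)] += volume

print(dict(totals))  # {1: 10000.0, 4: 20000.0}
```

For very deep or very large trees, the database-side alternative is a recursive CTE (PostgreSQL `WITH RECURSIVE`) via `Model.objects.raw()`, which keeps the whole rollup in one query.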
Deploy Django and Nginx under subpath
I'm trying to deploy a Django app with Gunicorn and Nginx under a subpath. I'm inside a corporate network, and the path www.example.com/myapp points to the IP 192.168.192.77:8080 of my PC on the local network (I have no control over the pathing nor the corporate network, just that port exposed to the internet through /myapp). I tried many things including this: How to host a Django project in a subpath?, but it doesn't show the Django welcome page, just the Nginx welcome page. I also can't access the Django admin page that should be at /myapp/admin, just a 404 page. This is the config of my site in the sites-available folder for Nginx: server { listen 8080; server_name 192.168.192.77; location /myapp/static/ { root /home/user/myapp; } location /myapp/ { include proxy_params; proxy_pass http://unix:/run/gunicorn.sock; } } I tried proxy_set_header SCRIPT_NAME /myapp; but it didn't work. If I don't configure any paths, it shows the Django welcome page at /myapp but then I can't access /myapp/admin, also a 404. Curiously, if I start the Django development server using python manage.py runserver without Nginx it works: the Django welcome page shows at /myapp and I can access /myapp/admin with the … -
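If Nginx is proxying but Django still 404s under the prefix, the Django side can be told about the subpath as well. A hedged sketch of the relevant settings (FORCE_SCRIPT_NAME, STATIC_URL, and the cookie-path settings are standard Django settings; the values assume the /myapp prefix from the question):

```python
# settings.py -- make Django generate and resolve URLs under /myapp.
FORCE_SCRIPT_NAME = "/myapp"      # prefix prepended to all reversed URLs
STATIC_URL = "/myapp/static/"     # so templates emit prefixed asset URLs
SESSION_COOKIE_PATH = "/myapp"    # optional: scope cookies to the subpath
CSRF_COOKIE_PATH = "/myapp"
```

Separately, note that `location /myapp/static/ { root /home/user/myapp; }` makes Nginx look for files under /home/user/myapp/myapp/static/, because `root` appends the full request URI; `alias /home/user/myapp/static/;` may be what was intended.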
Django transaction.atomic() on single operation prevents race conditions?
Why do I need to use atomic() when I have only one DB operation inside the atomic block? My AI assistant tells me that it prevents race conditions, but I don't use select_for_update() inside. It says the DB looks at unique constraints and sets a lock automatically, but only when I use atomic(); without atomic(), race conditions can happen. Is that true? Can you explain this behaviour? I don't understand how it works if I have only one DB operation inside. Code example: with atomic(): Model.objects.create(....) -
can't find xgettext or msguniq but gettext-base is installed
As part of a Django project, I need to build translation *.po files, but I get the error CommandError: Can't find xgettext. Make sure you have GNU gettext tools 0.19 or newer installed. when I run django-admin makemessages -a, and CommandError: Can't find msguniq. Make sure you have GNU gettext tools 0.19 or newer installed. when I run django-admin makemessages -l en. I see that what is missing is supposed to come from the OS, and I run Ubuntu 25.04. So I tried to run xgettext and msguniq on their own. Each time I get Command 'xgettext' not found, but can be installed with: sudo apt install gettext So I tried doing just that, but apt fails with Error: Unable to locate package gettext. However, when I run gettext -V I do have gettext v0.23.1 installed. It seems to come from the package gettext-base, which is indeed installed but can't seem to be used. I searched the internet but can't seem to find anything helpful. I don't know if it is relevant, but I also have python-gettext installed in my Python venv. Any idea how to make Django find gettext in this situation? -
Encoding full payload and decoding in server in REST
Issue: the WAF is flagging errors because some HTML tags are included in my payload responses (mostly field-level messages and user guides). Sometimes I am also sending R programming language code to the server, which is just stored in the database. During the WAF security check, it reports a vulnerability saying HTML tags and code are detected. My current solution: our team proposed encoding the entire payload and decoding it in Django middleware. But I am wondering if this is really the best approach. Validation and question: Will this approach be efficient in the long run? If you have faced the same issue, can you please suggest the right approach? Thank you. -
Deploying Dockerized (React + Django + PostgreSQL ) app with custom license to a client without exposing source code
I am running a test simulation on a virtual server in VirtualBox to see how the procedure of installing a web application using Docker would work on a client server. My stack includes: Frontend: React.js, built into a Docker image Backend: Django (Python) in Docker Database: PostgreSQL 16 in Docker Orchestration: Docker Compose managing all services Environment variables: Managed via .env.docker for the backend (database credentials, email settings, etc.) and for the frontend at build time (API URL) License: A custom license mechanism I implemented myself, which must be included and validated on the client server using license.json as the key sold to clients In my test: I built the backend and frontend Docker images locally on my development machine. For the frontend, I rebuilt the image with REACT_APP_API_URL=http://localhost:8000 so that it points to the local backend. I exported the backend and frontend images as .tar files to simulate distribution to a client server. On the client server (virtual machine), I loaded the images and tried running them using Docker Compose. I observed that if the frontend API URL is not baked in at build time, React requests go to undefined/users/.... Question: For a real client deployment using this stack, … -
How to Avoid JWT Collision While Receiving Bearer Token
I am doing a Django project where I am using JWT tokens for authentication. The problem is that two different JWT tokens, differing only slightly, are both accepted as valid against the same signature produced by the backend. What is the reason? I also tried the implementation in FastAPI using PyJWT and the result was much the same: two different tokens were accepted by the backend server. Valid token from backend: with c at the end. Other forged tokens that are accepted: with d at the end, with e at the end. Other forged tokens that are rejected: with b at the end, with g at the end. -
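A sketch of the likely cause: this is not a broken signature check but a base64url quirk. A 32-byte HMAC-SHA256 signature encodes to 43 characters, so the final character contributes only 4 of its 6 bits; most decoders (including Python's) silently drop the 2 unused bits, so several final characters decode to identical signature bytes. Tokens differing only there carry the *same* signature, not a forgery. The placeholder signature below is illustrative.

```python
# base64url: 'c','d','e','f' share their top 4 bits, so as a final character
# of a 43-char signature they all decode to the same 32 bytes.
import base64

sig = "A" * 42  # stand-in for the first 42 chars of a real signature

decoded = {c: base64.urlsafe_b64decode(sig + c + "=") for c in "bcdefg"}

print(decoded["c"] == decoded["d"] == decoded["e"] == decoded["f"])  # True
print(decoded["b"] == decoded["c"], decoded["g"] == decoded["c"])    # False False
```

That matches the reported pattern exactly: c is the canonical encoding, d and e decode to the same bytes (accepted), while b and g change the decoded bytes (rejected). Strict verifiers can reject non-canonical encodings by re-encoding the decoded signature and comparing strings.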
Can't insert rows into Supabase profile table even after creating the RLS policy to do so for the sign up feature
Again, I am quite new to Supabase, so I apologize in advance if I don't provide clear details in this post or mix up some terms. Basically, I am doing auth using Supabase and have this table called "profiles" with columns: id - UUID, username - text, email - text. Now, when I create a new account using Supabase it works: the account gets registered and shows up in the auth tab, but the new row doesn't get inserted into profiles. user = response.user if user: resp = supabase.table("profiles").insert({ "id": user.id, "username": username, "email": email }).execute() print(resp) request.session["user_id"] = user.id request.session["username"] = username return redirect("home") Now, my RLS for the profiles table is: Enable insert for authenticated users only, INSERT, anon, authenticated, and I am using a service key to create the supabase client. Even after all that, I keep getting the error -> APIError: {'message': 'new row violates row-level security policy for table "profiles"', 'code': '42501', ...} PLEASE HELP ME I HAVE NO IDEA HOW TO FIX THIS, I almost let AI take over my code atp but nahh I'm not that desperate 💔 -
Is it possible to force mysql server authentication using django.db.backends.mysql?
It's my first question on Stack Overflow, because I can't find the relevant information in the Django documentation. Is it possible to force MySQL server authentication with SSL using django.db.backends.mysql? I have checked its implementation on Django's GitHub and it seems it supports only 3 SSL arguments: ca, cert and key. What I need is the equivalent of --ssl-mode=VERIFY_IDENTITY. Has anyone found a workaround for this problem? Here is my current configuration. The TLS channel is working as expected, but the identity of the MySQL server is not validated. DATABASES = { 'default': { 'ENGINE': 'django.db.backends.mysql', 'NAME': env('DB_NAME'), 'USER': env('DB_USER'), 'PASSWORD': env('DB_PASSWORD'), 'HOST': env('DB_HOST'), 'PORT': env('DB_PORT'), 'CONN_MAX_AGE': 600, 'OPTIONS':{ 'ssl':{ 'ca': env('CA_CERT'), 'cert': env('CERT'), 'key': env('KEY') } } } } -
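One possible workaround, hedged: it relies on mysqlclient (the MySQLdb fork behind django.db.backends.mysql) supporting an `ssl_mode` connection keyword in recent versions, and on Django passing everything in OPTIONS straight through to the driver's connect() call. The paths below are placeholders.

```python
# Hedged sketch, not verified against a live server: request server
# identity verification via the driver's ssl_mode keyword.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        # NAME / USER / PASSWORD / HOST / PORT as in the question ...
        "OPTIONS": {
            "ssl_mode": "VERIFY_IDENTITY",   # assumes mysqlclient >= 1.4.3
            "ssl": {"ca": "/path/to/ca.pem"},  # CA used to verify the server
        },
    }
}
```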
How to reuse a Django model for multiple relationships
I want to make a task model and a user model. And I want each task to be able to be related to 3 users. Each task should be related to a creator user, an assignee user, and a verifier user. And I want to only have one user table. My inclination is to have 3 foreign keys on the task table: creator_id, assignee_id, and verifier_id. Is this the correct way to do it? How do I model that in Django? -
How to aggregate a group by query in django?
I'm working with time series data which are represented using this model: class Price: timestamp = models.IntegerField() price = models.FloatField() Assuming timestamp has 1 min interval data, this is how I would resample it to 1 hr: queryset = ( Price.objects.annotate(timestamp_agg=Floor(F('timestamp') / 3600)) .values('timestamp_agg') .annotate( timestamp=Min('timestamp'), high=Max('price'), ) .values('timestamp', 'high') .order_by('timestamp') ) which runs the following sql under the hood: select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp Now I want to calculate a 4 hr moving average, usually calculated in the following way: select *, avg(high) over (order by timestamp rows between 4 preceding and current row) ma from (select min(timestamp) timestamp, max(price) high from core_price group by floor((timestamp / 3600)) order by timestamp) or Window(expression=Avg('price'), frame=RowRange(start=-4, end=0)) How to apply the window aggregation above to the first query? Obviously I can't do something like this since the first query is already an aggregation: >>> queryset.annotate(ma=Window(expression=Avg('high'), frame=RowRange(start=-4, end=0))) django.core.exceptions.FieldError: Cannot compute Avg('high'): 'high' is an aggregate -
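When the ORM refuses a window over an aggregate, two pragmatic escape hatches are wrapping the grouped query as a subquery in raw SQL (as in the question's second snippet) or finishing the moving average in Python over the already-small hourly rows. A sketch of the Python fallback, where `rows` stands in for `list(queryset)`:

```python
# 5-row moving average (4 preceding + current), like RowRange(start=-4, end=0).
from collections import deque

rows = [
    {"timestamp": 0,     "high": 10.0},
    {"timestamp": 3600,  "high": 12.0},
    {"timestamp": 7200,  "high": 11.0},
    {"timestamp": 10800, "high": 13.0},
    {"timestamp": 14400, "high": 14.0},
]

window = deque(maxlen=5)  # deque drops the oldest value automatically
for row in rows:
    window.append(row["high"])
    row["ma"] = sum(window) / len(window)

print(rows[-1]["ma"])  # 12.0
```

Like SQL's `rows between 4 preceding and current row`, the early rows average over however many values exist so far rather than waiting for a full window.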
How to integrate JWT authentication with HttpOnly cookies in a Django project that already uses sessions, while keeping roles and permissions unified?
I currently have a monolithic Django project that uses Django’s session-based authentication system for traditional views (login_required, session middleware, etc.). Recently, I’ve added a new application within the same project (also under the same templates directory) that communicates with the backend via REST APIs (Django REST Framework) and uses JWT authentication with HttpOnly cookies. The goal is for both parts (the old and the new) to coexist: The legacy sections should continue working with regular session-based authentication. The new app should use JWT authentication to access protected APIs. The problem I’m facing is how to properly handle permissions and roles across both authentication systems (sessions and JWT) without duplicating logic or breaking compatibility. Here’s what I want to achieve: Roles and permissions (e.g., X, Y, Z) should be defined centrally in the backend (either using Django Groups or a custom Role model). On the backend, traditional views should use @login_required, while API views should use JWTAuthentication with custom permission classes. On the frontend, I want to show or hide sections, submenus, or information depending on the authenticated user’s roles and permissions. (How can I properly integrate this?) All of this must work within the same Django project and the same … -
Changing Django Model Field for Hypothesis Generation
I'm testing the generation of an XML file, but it needs to conform to an encoding. I'd like to be able to simply call st.from_model(ExampleModel) and have the text fields conform to this encoding without needing to go over each and every text field. Something like: register_field_strategy(models.CharField, st.text(alphabet=st.characters(blacklist_categories=["C", "S"], blacklist_characters=['&']))) I searched around but didn't find anything which could do this. Anyone have any advice on how to make something like this work? -
Row in Postgres invisible from the Django ORM though visible from the shell [closed]
Hello, I'm seeing incomprehensible behaviour that differs between the Django shell (python manage.py shell) and code executed via the server (python manage.py runserver). The query below returns an empty queryset from the Django ORM, while from the shell (or pgAdmin with a raw SQL query) it returns a non-null result: DossierValideur.objects.filter(id_dossier_id=185).values_list("id_instructeur", flat=True) This is the only id_dossier_id that causes a problem; the same query works for all the others. Context: I'm using Django + PostgreSQL with several schemas (public, avis, documents, instruction, utilisateurs). Configuration: Django 5.1.7, PostgreSQL 15, Python 3.12, OS: Windows. In my settings.py: DATABASES = { 'default': { 'ENGINE': 'django.db.backends.postgresql', 'NAME': os.environ.get('BDD_NAME'), 'USER': os.environ.get('BDD_USER'), 'PASSWORD': os.environ.get('BDD_PASSWORD'), 'HOST': os.environ.get('BDD_HOSTNAME'), 'PORT': os.environ.get('BDD_PORT'), 'OPTIONS': { 'options': '-c search_path=public,avis,documents,instruction,utilisateurs' }, } } The two models involved (same base but different schemas): class Instructeur(models.Model): id = models.AutoField(primary_key=True) id_ds = models.CharField(unique=True, blank=True, null=True) email = models.CharField(unique=True) id_agent_autorisations = models.ForeignKey(AgentAutorisations, models.RESTRICT, db_column='id_agent_autorisations') class Meta: managed = False db_table = '"utilisateurs"."instructeur"' def __str__(self): if self.id_agent_autorisations : return f"{self.id_agent_autorisations.nom} {self.id_agent_autorisations.prenom}" else : return self.email class Dossier(models.Model): id = models.AutoField(primary_key=True) id_ds = models.CharField(unique=True, blank=True, null=True) id_etat_dossier = models.ForeignKey(EtatDossier, models.RESTRICT, db_column='id_etat_dossier') id_etape_dossier = models.ForeignKey(EtapeDossier, models.RESTRICT, db_column='id_etape_dossier', default=10) numero = models.IntegerField(unique=True) date_depot = … -
How can I set a fixed iframe height for custom preview sizes in Wagtail’s page preview?
I’m extending Wagtail’s built-in preview sizes to include some additional device configurations. Wagtail natively supports device_width, but not device_height. I’d like to define both width and height for the preview iframe instead of having it default to height: 100%. Here’s an example of my mixin that extends the default preview sizes: from wagtail.models import Page, PreviewableMixin from django.utils.translation import gettext_lazy as _ class ExtendedPreviewSizesMixin(PreviewableMixin): """Extend the default Wagtail preview sizes without replacing them.""" @property def preview_sizes(self): base_sizes = super().preview_sizes extra_sizes = [ { "name": "12_inch", "icon": "hmi-12", "device_width": 1280, "device_height": 800, # not supported by Wagtail by default "label": _("Preview in 12-inch screen"), }, { "name": "24_inch", "icon": "hmi-24", "device_width": 1920, "label": _("Preview in 24-inch screen"), }, ] return base_sizes + extra_sizes @property def preview_modes(self): base_modes = super().preview_modes extra_modes = [ ("custom", _("Custom Preview")), ("custom_with_list", _("Custom Preview with List")), ] return base_modes + extra_modes def get_preview_template(self, request, mode_name): if mode_name == "custom": return "previews/preview_custom.html" if mode_name == "custom_with_list": return "previews/preview_custom_with_list.html" return "previews/default_preview.html" By default, Wagtail sets the preview iframe width using: width: calc(var(--preview-iframe-width) * var(--preview-width-ratio)); There doesn’t seem to be an equivalent variable for height. Question: Is there a way to set a fixed iframe height for custom preview sizes … -
How to enable bulk delete in Wagtail Admin (Wagtail 2.1.1, Django 2.2.28)
I’m currently working on a project using Wagtail 2.1.1 and Django 2.2.28. In Django Admin, there’s a built-in bulk delete action that allows selecting multiple records and deleting them at once. However, in Wagtail Admin, this functionality doesn’t seem to exist in my current version. I want to implement a bulk delete feature for one of my custom models (not a snippet), similar to how Django Admin provides it. Here’s my model example: class membership(address): user = models.OneToOneField(m3_account, on_delete=models.CASCADE, unique=True) locality_site = models.ForeignKey(wagtailSite, null=True, on_delete=models.CASCADE) # Contact details: business_name = models.CharField(max_length=100, verbose_name='Business Name/Account Name') contact_name = models.CharField(max_length=100) phone = models.CharField(max_length=70, verbose_name="Phone Number") def __str__(self): return self.business_name Before I start writing custom logic for this, I’d like to know: Does Wagtail support bulk delete functionality in newer versions of the admin interface? If yes, from which version was it officially introduced or supported? Would upgrading from Wagtail 2.1.1 to a certain version allow me to use this feature directly (without registering my model as a snippet)? I want to keep using Wagtail’s admin interface (ModelAdmin) and not convert my model into a snippet. Any guidance on compatible versions or recommended upgrade paths would be appreciated. -
Django-Oscar: UserAddressForm override in oscar fork doesn't work
I need to override the UserAddress model. My steps: Fork the address app: python manage.py oscar_fork_app address oscar_fork Override the model: from django.db import models from django.conf import settings from django.utils.translation import gettext_lazy as _ AUTH_USER_MODEL = getattr(settings, "AUTH_USER_MODEL", "auth.User") class Address(models.Model): city = models.CharField("city ", max_length=255, blank=False) street = models.CharField("street ", max_length=255, blank=False) house = models.CharField("house ", max_length=30, blank=False) apartment = models.CharField("apartment ", max_length=30, blank=True) comment = models.CharField("comment ", max_length=255, blank=True) class UserAddress(Address): user = models.ForeignKey( AUTH_USER_MODEL, on_delete=models.CASCADE, related_name="addresses", verbose_name=_("User"), ) from oscar.apps.address.models import * But I ran into an error during makemigrations: django.core.exceptions.FieldError: Unknown field(s) (line1, phone_number, line4, notes, state, postcode, last_name, line2, country, line3, first_name) specified for UserAddress I tried to override UserAddressForm too: from django import forms from .models import UserAddress class UserAddressForm(forms.ModelForm): class Meta: model = UserAddress fields = [ "city", "street", "house", "apartment", "comment", ] But it doesn't work. What am I doing wrong?