Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
JavaScript Django default scaling using extends index
I'm using Django's template inheritance (extends) in every page of the app. The current design looks too zoomed out, and I want to adjust the default scaling through my index.html, but it didn't work. I also tried using custom CSS, but it still doesn't fix the issue. Does anyone have an idea how I can adjust the default scaling properly? I have this in my index.html <meta name="viewport" content="width=450, initial-scale=0.6, user-scalable=yes, minimum-scale=0.6, maximum-scale=0.6" /> -
Filter Django RangeField by comparing to a point, not to another range
The PostgreSQL specific model fields docs are very specific about how to compare one RangeField to another range. But how do you compare a range to a single point? For example, if I've got a model with valid_range=DateTimeRangeField, and I want to find all instances which are no longer valid, I need to do something like: from django.utils import timezone as tz MyProduct.objects.filter(valid_range__lt=tz.now()) But this isn't allowed. I thought I could use fully_lt but that's not allowed with a particular date either. How do I filter a DateTimeRangeField to find instances whose ranges ended before a certain datetime? -
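A possible direction, sketched under the assumption that the postgres range transforms (such as endswith, which extracts the upper bound) can be chained with ordinary lookups as the contrib.postgres docs describe:

    from django.utils import timezone as tz

    # upper(valid_range) < now  ->  instances whose range ended before the current time
    # (ranges with an unbounded upper end are excluded, which matches "still valid")
    expired = MyProduct.objects.filter(valid_range__endswith__lt=tz.now())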
502 Bad Gateway on AWS ELB with Nginx + Django + Gunicorn
Summary of Issue: 502 Bad Gateway from ELB to Django app behind Nginx + Gunicorn on EC2 Environment: Hi, I wonder if anyone can assist. I've been banging my head against a wall for over a week now, I've tried two different AIs and reviewed everything I can find here and on AWS. I have an Elastic Load Balancer (Application) and Auto Scaling Group. Everything is set up in line with best practice; HTML and PHP pages are served up fine. However, Django is not served to the public side of the ELB; it returns a 502 Bad Gateway. • AWS Elastic Load Balancer (ELB) in front of EC2 instance • ELB terminates HTTPS, forwards HTTP (port 80) to Nginx on EC2 • Nginx configured as reverse proxy forwarding /app/ requests to Gunicorn via Unix socket • SELinux is set to Permissive • Django app served by Gunicorn, running on EC2 • Nginx version 1.28.0, Gunicorn serving Django app on Unix socket /tmp/gunicorn_.sock • Django app uses virtual environment with dependencies installed per deployment script Observed Behavior: • curl -vk https:///test (simple Nginx endpoint) returns HTTP 200 OK correctly • curl --unix-socket /tmp/gunicorn_.sock http:///app/ returns HTTP 200 OK correctly — … -
How to use pytest fixtures in a single Django TestCase test function
The test yields TypeError: test() missing 1 required positional argument: 'fix' from django.test import TestCase import pytest @pytest.fixture def fix(): return "x" class QueryTestCase(TestCase): def test(self, fix): print(fix) A similar question exists, but I want the fixture to be used only in that particular test, not for the whole class -
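A sketch of the usual workarounds, assuming pytest-django is installed: pytest cannot inject fixtures as arguments into unittest-style TestCase methods, so the test either moves to a plain pytest class or receives the value through an autouse fixture defined on the class:

    import pytest
    from django.test import TestCase

    @pytest.fixture
    def fix():
        return "x"

    # Option 1: a plain pytest-style class, so the fixture can be requested per
    # test function (add @pytest.mark.django_db if the test touches the database).
    class TestQuery:
        def test_single(self, fix):
            assert fix == "x"

    # Option 2: keep the TestCase and attach the value via an autouse fixture
    # (it runs for every test in the class, but only tests that read self.fix care).
    class QueryTestCase(TestCase):
        @pytest.fixture(autouse=True)
        def _inject_fix(self, fix):
            self.fix = fix

        def test(self):
            assert self.fix == "x"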
importing files twice from multiple files
imagine i have a "first.py" file with some code in it , and then i import it in another python file called "secend.py" then i import the "secend.py" file & the "first.py" into "third.py" file ,, Will this cause an performance problems? (For example, a file is imported twice inside another file .. ?) I always run into this problem in Django projects.(if it is a problem) for example i have my models file and i import the models file into serialazers file and then i import both models & serializers file into views file maybe im anot the best person at drawing things but there is a kinda stupid schematic that shows what im talking about! : -
Where are these PydanticDeprecatedSince20 and RemovedInDjango60Warning warnings coming from?
I am getting the following output in my warnings summary: venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:323: 15 warnings /Users/darshankalola/Desktop/roon-be/roon-doctor-service/.venv/lib/python3.11/site-packages/pydantic/_internal/_config.py:323: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/ warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning) .venv/lib/python3.11/site-packages/django/db/models/fields/__init__.py:1148: 15 warnings /Users/darshankalola/Desktop/roon-be/roon-doctor-service/.venv/lib/python3.11/site-packages/django/db/models/fields/__init__.py:1148: RemovedInDjango60Warning: The default scheme will be changed from 'http' to 'https' in Django 6.0. Pass the forms.URLField.assume_scheme argument to silence this warning, or set the FORMS_URLFIELD_ASSUME_HTTPS transitional setting to True to opt into using 'https' as the new default scheme. return form_class(**defaults) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html Results (7.89s): 45 passed I have searched and ensure that none of my tests are producing these warnings. In fact I have no idea where they are coming from. I have updated required packages to their latest versions, and have corrected instances of deprecated functionality. -
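On the RemovedInDjango60Warning specifically, the traceback points at a forms.URLField being instantiated (possibly by a third-party package); where the field is declared in your own code, passing assume_scheme explicitly silences it, as in this sketch with a hypothetical field name:

    from django import forms

    class ProfileForm(forms.Form):
        # hypothetical field; assume_scheme is the argument named in the warning
        website = forms.URLField(assume_scheme="https")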
Replacement for migrate --skip-checks
I am having some issues with the latest version of Django. It seems they have removed the --skip-checks option from the manage.py migrate command. The problem I am getting is that the app (that was working on 4.2) is trying to check the database tables (site_settings) before they exist. What is the right way to initially migrate data in Django 5.2.4 ? The error that I get now when running any manage.py command is: django.db.utils.ProgrammingError: relation "site_settings_setting" does not exist -
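The error itself suggests something queries the site_settings table at import or check time, before migrate can create it; a sketch of the usual fix, deferring the query until it is actually needed (app and model names below are guesses, not from the post):

    def get_site_setting(name, default=None):
        # import inside the function so nothing touches the DB at import time
        from site_settings.models import Setting  # hypothetical model path

        try:
            return Setting.objects.get(name=name).value
        except Exception:
            # the table may not exist yet while `migrate` is still running
            return default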
htmx web socket extension not rendering server messages
I'm using the below to render model initial counts on load. consumers.py: from channels.generic.websocket import WebsocketConsumer from django.template.loader import render_to_string from myapp.models import Model1, Model2, Model3, Model4 class DashboardHeaderConsumer(WebsocketConsumer): def update_dashboard_counts(self): stats = [ {'name': 'Model1', 'value': Model1.objects.count()}, {'name': 'Model2', 'value': Model2.objects.count()}, {'name': 'Model3', 'value': Model3.objects.count()}, {'name': 'Model4', 'value': Model4.objects.count()}, ] html = render_to_string('dashboard/header-stats.html', {'stats': stats}) self.send(text_data=html) def connect(self): self.accept() self.update_dashboard_counts() header-stats.html: {% for stat in stats %} <div class="col-sm-6 col-xl-3"> <div class="dashboard-stat rounded d-flex align-items-center justify-content-between p-4"> <div class="ms-3"> <p class="mb-2">{{ stat.name }}</p> <h6 class="mb-0">{{ stat.value }}</h6> </div> </div> </div> {% endfor %} my-template.html: {% load static %} {% block styles %} <link rel="stylesheet" href="{% static 'css/dashboard.css' %}"> {% endblock %} <div class="container-fluid pt-4 px-4"> <div class="row g-2 mb-2 stats-wrapper" hx-ext="ws" ws-connect="/ws/dashboard/header/" hx-target=".stats-wrapper" hx-swap="innerHTML" > </div> <div class="active-tasks-scroll-container"> <div class="row flex-nowrap g-2"> </div> </div> </div> I'm expecting the counts to show up. However, despite the message is received and can be seen in under the networks websocket request, the dom is empty. This is what gets rendered: <html lang="en" data-bs-theme="dark"> <head> <meta charset="UTF-8"> <title>App Title</title> <link href="/static/css/bootstrap.min.css" rel="stylesheet"> <link href="/static/css/index.css" rel="stylesheet"> <script src="/static/js/bootstrap.bundle.min.js"></script> <script src="/static/js/htmx.min.js"></script> <script src="/static/js/htmx-ext-ws%402.0.2"></script> <style> .htmx-indicator { opacity: 0 } .htmx-request .htmx-indicator { opacity: 1; transition: … -
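One thing worth checking, assuming the ws extension behaves as its documentation describes: incoming websocket messages are applied as out-of-band swaps keyed on element id, so the sent fragment needs a top-level element whose id already exists in the page. A sketch of the consumer side, with a hypothetical id:

    # in DashboardHeaderConsumer.update_dashboard_counts (assumes the template
    # contains a placeholder <div id="header-stats"> inside the stats wrapper)
    html = render_to_string('dashboard/header-stats.html', {'stats': stats})
    self.send(text_data=f'<div id="header-stats">{html}</div>')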
Django + Tailwind CSS deployment failing on Railway with Procfile parsing errors
I'm trying to deploy a Django application with compiled Tailwind CSS to Railway, but I keep getting Procfile parsing errors. The build process works fine (Tailwind compiles successfully), but the deployment fails during the Procfile parsing stage. Error Message Nixpacks build failed Error: Reading Procfile Caused by: found unknown escape character at line 1 column 44, while parsing a quoted scalar My Setup Project Structure: Jobflow/ ├── manage.py ├── Procfile ├── requirements.txt ├── package.json ├── tailwind.config.js ├── static/ │ └── css/ │ ├── input.css │ └── output.css (generated) ├── Jobflow/ │ ├── settings.py │ ├── wsgi.py │ └── urls.py └── JobFlow_app/ ├── models.py ├── views.py └── templates/ Current Procfile content: web: python manage.py runserver 0.0.0.0:$PORT requirements.txt: Django==4.2 python-decouple==3.8 gunicorn==21.2.0 whitenoise==6.6.0 psycopg2-binary==2.9.9 Pillow==10.1.0 dj-database-url==2.1.0 package.json scripts: { "scripts": { "build-css": "npx tailwindcss -i ./static/css/input.css -o ./static/css/output.css --watch", "build-css-prod": "npx tailwindcss -i ./static/css/input.css -o ./static/css/output.css --minify", "dev": "npm run build-css", "build": "npm run build-css-prod" }, "devDependencies": { "tailwindcss": "^3.4.0" } } What Works ✅ Tailwind CSS compilation - Build logs show: > npx tailwindcss -i ./static/css/input.css -o ./static/css/output.css --minify Rebuilding... Done in 417ms. ✅ Python dependencies installation - No errors during pip install ✅ Static file structure - All files are in … -
404 not found error Django URL with JavaScript fetch function
I'm building a Todo app with Django and JavaScript. I've reached the point where when I click a "trash" button, the note should be deleted, but it shows an error in the console, the reason for which is not clear to me, since I specified the correct URL path. The error appears when I click the "trash" button. urlpatterns = [ path('', views.index, name="index"), path('add_note/', views.add_note, name="add_note"), path('del_note/<int:id>/', views.del_note, name="del_note"), ] The Django view function for deleting a note is also created. @require_POST def del_note(request, id): del_obj = get_object_or_404(Note, id=id, user=request.user) del_obj.delete() return JsonResponse({"status": 200, "message": "Note deleted"}) And here is the HTML of the list with that "trash" button. <ul class="todo__list"> {% for note in notes %} <li class="todo__note flex" data-id="{{ note.id }}"> <div> <input type="checkbox" /> <span>{{ note.text }}</span> </div> <div class="delete__edit"> <button class="edit-btn" id="editBtn" type="button"> <img src="{% static 'images/edit.svg' %}" alt="" /> </button> <button class="delete-btn" id="deleteBtn" type="button"> <img src="{% static 'images/delete.svg' %}" alt="" /> </button> </div> </li> {% endfor %} </ul> And this is the JS fetch function that sends the request to the Django "del_note" URL path. const noteList = document.querySelector(".todo__list"); //const delUrl = document.body.dataset.delNoteUrl; function getCSRFToken() { const tokenInput = document.querySelector("input[name='csrfmiddlewaretoken']"); return tokenInput ? tokenInput.value : ""; } … -
Django "makemigrations" stuck for ever
When I run python manage.py makemigrations, it just gets stuck. No matter how long I wait, it stays frozen forever—no logs, no output, nothing. I even changed my PostgreSQL database to the default SQLite database in settings.py, but it still didn’t help. Please help me out. I expected python manage.py makemigrations to detect my model changes and generate migration files. Instead, it just gets stuck with no output, logs, or errors—completely frozen. Here’s what I’ve tried so far: Switched from PostgreSQL to the default SQLite DB in settings.py to rule out DB-related issues. Deleted all old migration files (except init.py) and tried running makemigrations again. Cleared all pycache and .pyc files. Rebuilt the virtual environment from scratch and reinstalled all dependencies. Simplified the models (removed unnecessary fields, used proper types like DateTimeField instead of CharField for dates). Tried isolating the models in a new Django app within the same project. Ran makemigrations --verbosity 3 and --dry-run, but still no output or detection. Even created a brand new Django project and app with a minimal model, and it still gets stuck when running makemigrations. At this point, I’m not sure if it’s Django, my environment, or something corrupted deep in the … -
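A way to see exactly where it hangs, assuming a Unix-like system and nothing beyond the standard library: register faulthandler on a signal near the top of manage.py, then signal the stuck process to dump every thread's Python traceback:

    # added temporarily at the top of manage.py
    import faulthandler
    import signal

    faulthandler.register(signal.SIGUSR1)  # not available on Windows

    # then run `python manage.py makemigrations`, find the PID, and from another shell:
    #   kill -USR1 <pid>
    # the traceback printed to stderr shows which import or connection call is blocked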
Django filter an m2m relation by a list of inputs (which must all match)
Let's take some Store and Book models as examples: class Book(Model): title = CharField(...) ... class Store(Model): books = ManyToManyField('Book', blank=True, related_name='stores') .... I receive a list of book titles and must return stores linked to those books. I need the option for both an AND query and an OR query. The OR query is rather simple; we only need a store to match once: Store.objects.filter(book__title__in=book_titles) However the AND query seems tricky. Perhaps I am simply too deep to notice, but so far I have only managed by chaining queries which is not very good, at least performance-wise. from django.db.models import Q filtering = Q() for book_title in book_title_list: filtering &= Q(id__in=Book.objects.get(title=book_title).stores) Store.objects.filter(filtering) This effectively creates an OUTER JOIN and a SELECT within the WHERE clause for every book title, which at 2 or 3 is not much but definitely not advisable when user input is not limited. Without explicitly looping and adding Q objects like this I have yet to obtain a query that actually works. More often than not, the query either only evaluates a single line of the m2m relation or behaves similarly to the OR query. As a reminder, the AND query needs all returned stores … -
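The usual single-query pattern for the AND case is to count distinct title matches per store and keep only stores that matched every requested title; a sketch against the models shown:

    from django.db.models import Count

    titles = set(book_title_list)
    stores = (
        Store.objects
        .filter(books__title__in=titles)
        .annotate(matched=Count('books__title', distinct=True))
        .filter(matched=len(titles))
    )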
How do I push the Django sqlite3 database and media folder to GitHub for a production server?
The problem occurs when I delete the db.sqlite3 and media entries from the .gitignore file and then run git add -A. The error comes back as "fatal: adding files failed": PS C:\Users\user\OneDrive - Education Department, Government of Punjab\Desktop\Django demos\Blog> git add -A error: read error while indexing media/uploads/23/09/21/web_extol_college.jpg: Invalid argument error: media/uploads/23/09/21/web_extol_college.jpg: failed to insert into database error: unable to index file 'media/uploads/23/09/21/web_extol_college.jpg' fatal: adding files failed -
Django REST Framework `pagination_class` on ViewSet is ignored
Describe the Problem I have a ModelViewSet in Django REST Framework designed to return a list of Order objects. To improve performance, I'm trying to implement custom pagination that limits the results to 65 per page. Despite setting the pagination_class property directly on my ViewSet, the API endpoint continues to return the full, unpaginated queryset (over 300 objects). It seems my custom pagination class is being completely ignored. My goal is for the API to return a paginated response with a count, next, previous, and a results list containing a maximum of 65 items when I request .../api/orders/?page=1. What I Tried Here is my setup: 1. pagination.py: I created a custom pagination class. # my_app/pagination.py from rest_framework.pagination import PageNumberPagination class CustomOrderPagination(PageNumberPagination): page_size = 65 page_size_query_param = 'page_size' max_page_size = 100 2. views.py: I assigned this custom class to my ViewSet. The queryset uses select_related and prefetch_related for performance. # my_app/views.py from rest_framework import viewsets from django.db.models import Sum from .models import Order from .serializers import OrderSlimSerializer from .pagination import CustomOrderPagination class OrderViewSet(viewsets.ModelViewSet): # I explicitly set the pagination class here pagination_class = CustomOrderPagination serializer_class = OrderSlimSerializer queryset = Order.objects.select_related('client').prefetch_related( 'orderitem_set__location__service_plan' ).annotate( total_amount=Sum('orderitem_set__service_plan__service_fee') ) # ... (permission_classes, filter_backends, etc.) ... 3. … -
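For comparison, this is roughly what DRF's own ListModelMixin.list does; if list() (or a custom @action) is overridden and returns Response(serializer.data) directly, pagination_class is bypassed entirely, so any override needs the paginate_queryset / get_paginated_response calls:

    from rest_framework.response import Response

    def list(self, request, *args, **kwargs):
        queryset = self.filter_queryset(self.get_queryset())
        page = self.paginate_queryset(queryset)
        if page is not None:
            serializer = self.get_serializer(page, many=True)
            return self.get_paginated_response(serializer.data)
        serializer = self.get_serializer(queryset, many=True)
        return Response(serializer.data)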
getCSRFToken is not defined error, JavaScript
This is the part of the code in the Django + JavaScript Todo App that is responsible for deleting a note. I need a csrftoken for this, but the JS is showing me an error in the console. What did I do wrong and how can I fix it? Uncaught ReferenceError: getCSRFToken is not defined at HTMLButtonElement.<anonymous> (main.js:100:30) const delUrl = document.body.dataset.delNoteUrl; deleteBtn.addEventListener("click", (e) => { e.preventDefault(); if (e.target.classList.contains("delete-btn")) { const parentLi = e.target.closest(".todo__note"); const noteId = parentLi.getAttribute("data-id"); fetch(`${delUrl}/${noteId}`, { method: "POST", headers: { "X-CSRFToken": getCSRFToken(), }, }) .then((response) => response.json()) .then((data) => { if (data.status == "success") { parentLi.remove(); } }); } }); Here is the HTML, if needed. <ul class="todo__list"> {% for note in notes %} <li class="todo__note flex" data-id="{{ note.id }}"> <div> <input type="checkbox" /> <span>{{ note.text }}</span> </div> <div class="delete__edit"> <button class="edit-btn" id="editBtn"> <img src="{% static 'images/edit.svg' %}" alt="" /> </button> <button class="delete-btn" id="deleteBtn"> <img src="{% static 'images/delete.svg' %}" alt="" /> </button> </div> </li> {% endfor %} </ul> -
How does DRF understand which field in serializer.py is related to which model field?
Imagine I have a super simple serializer.py file, and I just want to use it, nothing special, so I'm going to write something like this (with a model class called "Product") and it's going to work. But how does DRF understand which field in the serializer.py file belongs to which field of the "Product" class in the models file? (I told DRF nothing about it, especially considering that the API model != data model.) -
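The short version of the matching rule, sketched with hypothetical field names: a plain Serializer resolves each field by attribute name on whatever object it is given (overridable with source=), while a ModelSerializer additionally introspects Meta.model and Meta.fields to build those fields for you:

    from rest_framework import serializers

    # from .models import Product  # assumed model path

    class ProductSerializer(serializers.Serializer):
        # resolved as product.name and product.price when serializing
        name = serializers.CharField()
        price = serializers.DecimalField(max_digits=10, decimal_places=2)
        # source= points a serializer field at a differently named model attribute
        label = serializers.CharField(source='internal_label')

    class ProductModelSerializer(serializers.ModelSerializer):
        class Meta:
            model = Product            # fields are generated from the model
            fields = ['name', 'price']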
Results of Questionnaire to be downloaded as a spreadsheet
So I have this model, namely Questionnaire, in the models.py file of a Django project: class Questionnaire(models.Model): title = models.CharField(max_length=200) description = models.TextField(blank=True, null=True) formula = models.CharField( max_length=200, default='{total}', help_text="Formula to calculate the total score for this questionnaire. Use {total} and {number_of_questions} as placeholders." ) color = models.CharField( max_length=7, default='#000000', help_text="Color in HEX format. Examples: #FF5733 (red), #33FF57 (green)," " #3357FF (blue), #FF33A1 (pink), #A133FF (purple), #33FFF5 (cyan), #FF8C33 (orange)" ) What I want to do is download the results of the Questionnaire in spreadsheet form. I also have an admin.py file registering the model to show it in the UI, like this: class QuestionnaireAdmin(nested_admin.NestedModelAdmin): model = Questionnaire inlines = [QuestionInline] list_display = ['title', 'description', 'color'] search_fields = ['title', 'description'] So I think the best way to do this is to add an action button so the client can download the file with a single click. -
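A sketch of exactly that kind of admin action, writing CSV (swap in openpyxl or xlsxwriter if a real .xlsx file is required); the column names follow the model shown:

    import csv

    from django.contrib import admin
    from django.http import HttpResponse

    @admin.action(description="Download selected questionnaires as CSV")
    def export_as_csv(modeladmin, request, queryset):
        response = HttpResponse(content_type="text/csv")
        response["Content-Disposition"] = 'attachment; filename="questionnaires.csv"'
        writer = csv.writer(response)
        writer.writerow(["title", "description", "formula", "color"])
        for q in queryset:
            writer.writerow([q.title, q.description, q.formula, q.color])
        return response

    # in QuestionnaireAdmin:
    #     actions = [export_as_csv]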
Django google-auth-oauthlib insecure_transport error on Cloud Workstations despite HTTPS and SECURE_PROXY_SSL_HEADER
I'm developing a Django application on Firebase Studio environment. I'm trying to implement Google OAuth 2.0 for my users (doctors) to connect their Google Calendar accounts using the google-auth-oauthlib library. The application is accessed via the public HTTPS URL provided by Firebase (e.g., https://8000-firebase-onlinearsts-...cloudworkstations.dev). I've configured my Google Cloud Project, enabled the Calendar API, set up the OAuth consent screen, and created an OAuth 2.0 Client ID for a Web application with the correct https:// Authorized redirect URI (https://8000-firebase-onlinearsts-1753264806380.cluster-3gc7bglotjgwuxlqpiut7yyqt4.cloudworkstations.dev/accounts/google/callback/). However, when my Django application's OAuth callback view (accounts.views.google_oauth_callback) attempts to exchange the authorization code for tokens using flow.fetch_token(), I get the following error: Google Authentication Error An error occurred during the Google authentication process. Error details: Error during OAuth exchange: (insecure_transport) OAuth 2 MUST utilize https. I cannot understand why Im receiving this error if I am utilizing https. mysite/mysite/settings.py: SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') # Google API Settings GOOGLE_CLIENT_ID = '...' GOOGLE_CLIENT_SECRET = '...' GOOGLE_REDIRECT_URI = 'https://8000-firebase-onlinearsts-1753264806380.cluster-3gc7bglotjgwuxlqpiut7yyqt4.cloudworkstations.dev/accounts/google/callback/' # Matches Google Cloud Console GOOGLE_CALENDAR_SCOPES = [ 'https://www.googleapis.com/auth/calendar.events', 'https://www.googleapis.com/auth/calendar.readonly', 'https://www.googleapis.com/auth/calendar', ] To investigate why the insecure_transport error persists, I added debugging print statements to my callback view (accounts.views.google_oauth_callback) to inspect the incoming request headers and properties: accounts/views.py: @login_required def google_oauth_callback(request): flow = … -
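A workaround often used behind TLS-terminating proxies, sketched under the assumption that the callback view builds the redirect URL with request.build_absolute_uri(): force the scheme back to https before handing it to fetch_token (OAUTHLIB_INSECURE_TRANSPORT=1 also exists, but is meant for local development only):

    authorization_response = request.build_absolute_uri()
    if authorization_response.startswith("http://"):
        # the proxy terminated TLS, so Django only saw plain HTTP
        authorization_response = "https://" + authorization_response[len("http://"):]

    flow.fetch_token(authorization_response=authorization_response)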
Django, HTMX, class based generic views, querysets and pagination
I think this is as much a question about minimalism and efficiency, but anyway... I have a generic ListView that I'm using, along with HTMX which I'm a first time user of, but loving it so far! That said, I have some quirks here with the default behavior of a generic class based view that I'm not sure how to handle. Considering the following... class AccountListView(ListView): model = Account template_name = 'account_list.html' paginate_by = 100 def get_queryset(self): query = self.request.POST.get('query') try: query = int(query) except: pass if query: if isinstance(query, int): return Account.objects.filter( Q(id=query) ) else: return Account.objects.filter( Q(full_name__icontains=query) | Q(email1=query) | Q(email2=query) | Q(email3=query) ).order_by('-date_created', '-id') return Account.objects.all().order_by('-date_created', '-id') def post(self, request, *args, **kwargs): response = super().get(self, request, *args, **kwargs) context = response.context_data is_htmx = request.headers.get('HX-Request') == 'true' if is_htmx: return render(request, self.template_name + '#account_list', context) return response def get(self, request, *args, **kwargs): response = super().get(self, request, *args, **kwargs) context = response.context_data is_htmx = request.headers.get('HX-Request') == 'true' if is_htmx: return render(request, self.template_name + '#account_list', context) return response As you can likely gather, my issue here is I'm trying to implement two different functionalities in a single generic view... a quick-search, that checks whether the user has submitted an integer … -
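One way to trim the duplication, assuming the 'template#partial' syntax in the code above comes from django-template-partials: branch once in render_to_response and let post() delegate to get():

    from django.shortcuts import render
    from django.views.generic import ListView

    class AccountListView(ListView):
        # ... model, template_name, paginate_by, get_queryset as before ...

        def render_to_response(self, context, **response_kwargs):
            if self.request.headers.get('HX-Request') == 'true':
                return render(self.request, self.template_name + '#account_list', context)
            return super().render_to_response(context, **response_kwargs)

        def post(self, request, *args, **kwargs):
            # reuse the normal list flow for POSTed quick-searches
            return self.get(request, *args, **kwargs)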
How to get a billing cycle period between the 26th of the previous month and the 25th of the current month using Python (timezone-aware)?
The Problem I'm building a billing system in Django, and I need to calculate the billing period for each invoice. Our business rule is simple: The billing cycle starts on the 26th of the previous month at midnight (00:00:00); And ends on the 25th of the current month at 23:59:59. For example, if the current date is 2025-07-23, the result should be: start = datetime(2025, 6, 26, 0, 0, 0) end = datetime(2025, 7, 25, 23, 59, 59) We're using Django, so the dates must be timezone-aware (UTC preferred), as Django stores all datetime fields in UTC. The problem is: when I run my current code (below), the values saved in the database are shifted, like 2025-06-26T03:00:00Z instead of 2025-06-26T00:00:00Z. What We Tried We tried the following function: from datetime import datetime, timedelta from dateutil.relativedelta import relativedelta def get_invoice_period(reference_date: datetime = None) -> tuple[datetime, datetime]: if reference_date is None: reference_date = datetime.now() end = (reference_date - timedelta(days=1)).replace(hour=23, minute=59, second=59, microsecond=0) start = (reference_date - relativedelta(months=1)).replace(day=26, hour=0, minute=0, second=0, microsecond=0) return start, end But this causes timezone problems, and datetime.now() is not timezone-aware in Django. So when we save these values to the database, Django converts them to UTC, shifting the … -
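A sketch using Django's timezone utilities, under two assumptions: the boundaries are meant to be 00:00:00 / 23:59:59 in UTC, and the stated example (reference 2025-07-23 giving 2025-06-26 00:00:00 to 2025-07-25 23:59:59) is the rule:

    from datetime import timezone as dt_timezone

    from dateutil.relativedelta import relativedelta
    from django.utils import timezone

    def get_invoice_period(reference_date=None):
        if reference_date is None:
            reference_date = timezone.now()          # aware, already in UTC
        ref = reference_date.astimezone(dt_timezone.utc)
        start = ref.replace(day=26, hour=0, minute=0, second=0, microsecond=0) - relativedelta(months=1)
        end = ref.replace(day=25, hour=23, minute=59, second=59, microsecond=0)
        return start, end

If the cut-off should instead be midnight in the project's local TIME_ZONE, build the boundaries from timezone.localtime(reference_date) and let Django convert them to UTC on save.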
How to parse multipart/form-data from a PUT request in Django
I want to submit a form to my backend and use the form data as the initial value for my form. Simple stuff if you are using a POST request: def intervals(request, **kwargs): form = MyForm(initial=request.POST) However, I am sending a form that should replace the current resource, which should idiomatically be a PUT request (I am using HTMX which allows you to do that). The problem is that I cannot find out how I can parse the form data from a put request. request.PUT does not exist and QueryDict only works for query params. What am I missing here? -
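A sketch that reuses Django's own multipart parser on a PUT body, assuming the request really is multipart/form-data (HTMX sends that only with hx-encoding="multipart/form-data"; plain urlencoded bodies can instead be parsed with QueryDict(request.body)):

    from django.http import QueryDict
    from django.http.multipartparser import MultiPartParser

    def intervals(request, **kwargs):
        if request.method == "PUT":
            if request.content_type.startswith("multipart/"):
                # returns (QueryDict of fields, MultiValueDict of files)
                data, files = MultiPartParser(
                    request.META, request, request.upload_handlers, request.encoding
                ).parse()
            else:
                data = QueryDict(request.body, encoding=request.encoding)
            form = MyForm(initial=data)
        ...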
How many keys can I store in Django's file-based cache before it becomes a performance bottleneck?
I'm working with a large number of small-sized data entries (typically 2–3 KB each) and I'm using Django's file-based cache backend for storage. I would like to understand the scalability limits of this approach. Specifically: Is there a practical or recommended limit to the number of cache keys the file-based backend can handle efficiently? At what point (number of keys or total cache size) might I start seeing performance degradation or bottlenecks? Are there any known issues or filesystem-level constraints that I should be aware of when caching tens or hundreds of thousands of small files? I'm open to alternative caching strategies if the file-based backend is not well-suited for this use case. What is the most suitable Django cache backend for storing a high volume of small entries (possibly tens or hundreds of thousands)? -
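One detail worth knowing from Django's cache documentation: the file-based backend culls entries once MAX_ENTRIES is exceeded, and the default is only 300, so large key counts have to be configured explicitly; a settings sketch:

    CACHES = {
        "default": {
            "BACKEND": "django.core.cache.backends.filebased.FileBasedCache",
            "LOCATION": "/var/tmp/django_cache",
            # defaults are MAX_ENTRIES=300 and CULL_FREQUENCY=3; raise them
            # deliberately if hundreds of thousands of keys are expected
            "OPTIONS": {"MAX_ENTRIES": 100_000, "CULL_FREQUENCY": 4},
        }
    }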
How to swap between detail views for user-specific datasets in Django (Python)?
On a table (let's call it Items) I open the detail view for an item. On the detail view I want a "next" and a "previous" button. The button should open the next item's detail view. I cannot just traverse through all datasets, because the user cannot access other users' datasets. I thought about using a doubly linked list where each node holds the id of the current dataset and, as pointers, the next and previous item ids. When the user reaches the tail he automatically goes to the head, and the other way around. But I don't want to load this list every time the user opens the next detail view. Is there a resource-friendly way to swap between detail views without just incrementing the id? -
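A stateless alternative that needs no precomputed list, sketched with hypothetical model and field names: scope the queryset to the current user and look up the neighbouring primary keys on each request, wrapping around at the ends as described:

    def get_neighbours(item, user):
        qs = Item.objects.filter(owner=user)          # hypothetical names
        next_item = qs.filter(pk__gt=item.pk).order_by("pk").first()
        prev_item = qs.filter(pk__lt=item.pk).order_by("-pk").first()
        # wrap around tail -> head and head -> tail
        if next_item is None:
            next_item = qs.order_by("pk").first()
        if prev_item is None:
            prev_item = qs.order_by("-pk").first()
        return prev_item, next_item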
Is it possible to run Django migrations on a Cloud SQL replica without being the owner of the table?
I'm using Google Cloud SQL for PostgreSQL as an external primary replica, with data being replicated continuously from a self-managed PostgreSQL source using Database Migration Service (DMS) in CDC mode. I connected a Django project to this replica and tried to run a migration that renames a column and adds a new one: uv run python manage.py migrate However, I get the following error: django.db.utils.ProgrammingError: must be owner of table camera_manager_invoice This makes sense, since in PostgreSQL, ALTER TABLE requires table ownership. But in this case, the replica was created by DMS, so the actual table owner is the replication source — and not the current user. 🔍 The Problem: I'm trying to apply schema changes via Django migrations on a Cloud SQL replica that I do not own. The replication is working fine for data (CDC), but I need to apply structural changes on the replica independently. ✅ What I Tried: Changing the connected user: still not the owner, so same error. Running sqlmigrate to get the SQL and applying manually: same result — permission denied. Attempted to change ownership of the table via ALTER TABLE ... OWNER TO ...: failed due to not being superuser. Tried running migration … -
Why is my Cloud SQL external replica not reflecting schema changes (like new columns) after Django migrations?
I'm using Google Cloud Database Migration Service (DMS) to replicate data from a self-managed PostgreSQL database into a Cloud SQL for PostgreSQL instance, configured as an external primary replica. The migration job is running in CDC mode (Change Data Capture), using continuous replication. Everything seems fine for data: new rows and updates are being replicated successfully. However, after running Django’s makemigrations and migrate on the source database — which added new columns and renamed others — the schema changes are not reflected in the Cloud SQL replica. The new columns simply don’t exist in the destination. 🔍 What I’ve done: Source: self-managed PostgreSQL instance. Target: Cloud SQL for PostgreSQL set as an external replica. Replication user has proper privileges and is connected via mTLS. The job is active, with "Optimal" parallelism and healthy status. Data replication (INSERT/UPDATE/DELETE) works great. Schema changes like ALTER TABLE, ADD COLUMN, RENAME COLUMN are not reflected in the replica. ❓ Question: How can I configure DMS or Cloud SQL to also replicate schema changes (like ALTER TABLE or CREATE COLUMN) from the source to the replica? Or is it necessary to manually apply schema changes on the target? I'm fine with workarounds or official recommendations …