Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
sass.CompileError: File to import not found or readable
I am working on a Django project and decided to use django-simple-bulma, but whenever I run python manage.py collectstatic I keep getting a sass.CompileError: Traceback (most recent call last): File "C:\Users\DIAWHIZ\desktop\themiraclemovement\manage.py", line 22, in <module> main() File "C:\Users\DIAWHIZ\desktop\themiraclemovement\manage.py", line 18, in main execute_from_command_line(sys.argv) File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line utility.execute() File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django\core\management\__init__.py", line 436, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django\core\management\base.py", line 413, in run_from_argv self.execute(*args, **cmd_options) File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django\core\management\base.py", line 459, in execute output = self.handle(*args, **options) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 209, in handle collected = self.collect() ^^^^^^^^^^^^^^ File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django\contrib\staticfiles\management\commands\collectstatic.py", line 126, in collect for path, storage in finder.list(self.ignore_patterns): File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django_simple_bulma\finders.py", line 216, in list files.extend(self._get_custom_css()) ^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\django_simple_bulma\finders.py", line 187, in _get_custom_css css_string = sass.compile(string=scss_string, output_style=self.output_style) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\DIAWHIZ\desktop\themiraclemovement\tmmenv\Lib\site-packages\sass.py", line 725, in compile raise CompileError(v) sass.CompileError: Error: File to import not found or unreadable: []. on line 1:1 of stdin >> @import "[]";
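The @import "[]" in the error suggests the finder is being handed the string representation of a list rather than actual SCSS paths. As a hedged sketch (not a confirmed fix, and the keys/paths should be checked against django-simple-bulma's README), the custom_scss setting is expected to be a real list of paths relative to a static directory:

```python
# settings.py -- illustrative sketch only; path is hypothetical
BULMA_SETTINGS = {
    # a list of SCSS file paths, not a stringified list like "[]"
    "custom_scss": [
        "css/custom.scss",
    ],
}
```

If there is no custom SCSS at all, removing the custom_scss entry (or leaving it as an actual empty list rather than the string "[]") should stop the finder from trying to compile @import "[]".
-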
Special Characters in Auto Slug Field
I'm trying to find the solution to a problem I encounter when using prepopulated_fields. I use this to create a slug based on an entered title. However, if I insert special characters (+, -, etc.) in the title, the slug does not pick them up. Do you have any idea how to keep these special characters, so that the slug stays consistent with the title? Example: Title | Samsung Galaxy S24+ Slug (should become) | samsung-galaxy-s24+ or something similar. This is my current code: models.py class Post(models.Model): titolo = models.CharField(max_length=255) descrizione = RichTextUploadingField('Text', config_name='default', blank=True, null=True) slug = models.SlugField(unique=True) admin.py class PostAdmin(admin.ModelAdmin): prepopulated_fields = {'slug': ('titolo',)}
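Django's prepopulated_fields (and django.utils.text.slugify) deliberately strip characters such as "+", so keeping them means generating the slug yourself. A minimal sketch, assuming a custom save() and a relaxed field; the helper name and regex are illustrative:

```python
# models.py -- sketch only; adapt the allowed-character set to your needs
import re

from django.db import models

def slugify_keep_plus(value: str) -> str:
    value = value.strip().lower()
    value = re.sub(r"\s+", "-", value)      # spaces -> hyphens
    return re.sub(r"[^\w\-+]", "", value)   # keep word chars, hyphens and "+"

class Post(models.Model):
    titolo = models.CharField(max_length=255)
    # CharField instead of SlugField, because SlugField's default validator rejects "+"
    slug = models.CharField(max_length=255, unique=True, blank=True)

    def save(self, *args, **kwargs):
        if not self.slug:
            self.slug = slugify_keep_plus(self.titolo)
        super().save(*args, **kwargs)
```

Note that the admin's prepopulated_fields preview is client-side JavaScript and will still drop the "+" in the preview; generating the slug in save() (and dropping that admin option, or attaching a custom validator to a SlugField) sidesteps this.
-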
Create smaller pdfs with pdfkit
In my Django project I am creating PDFs in Python using the pdfkit library, but my 15-page PDFs come out at about 16 MB. My library version: pdfkit==1.0.0. How can I reduce the file size, given the code below: import pdfkit from django.template.loader import render_to_string my_html = render_to_string('mypath/myhtml.html') pdfkit.from_string(my_html, False)
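Most of the size usually comes from images that wkhtmltopdf embeds at full resolution. A hedged sketch (the exact values are assumptions to tune) passes wkhtmltopdf's quality flags through pdfkit's options argument:

```python
import pdfkit
from django.template.loader import render_to_string

my_html = render_to_string('mypath/myhtml.html')

# wkhtmltopdf flags passed through pdfkit; values here are illustrative
options = {
    "lowquality": None,      # --lowquality: smaller, lossier output
    "image-quality": 60,     # --image-quality: stronger JPEG compression
    "image-dpi": 150,        # --image-dpi: downscale embedded images
}
pdf_bytes = pdfkit.from_string(my_html, False, options=options)
```

If tuning the image options is not enough, compressing the finished PDF afterwards (for example with Ghostscript) is another common approach.
-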
Why does my XMLHttpRequest cancel my background task before reloading the page
I'm trying to send an XMLHttpRequest to my backend if a user chooses to reload the webpage while a task is running on the backend. It should work like this: the user starts the task (translation); if the user decides to reload the page or navigate away, they should get an alert that the task will stop if they leave; if they still choose to navigate away, the request is sent to a stop_task view on the backend. Currently, if the user starts to navigate away or reloads, the task is terminated as soon as the alert shows, instead of after the user confirms that they still want to reload/navigate away. Here is my JS code: window.addEventListener('beforeunload', function (e) { if (isTranslating) { stopTranslation(); e.preventDefault(); e.returnValue = ''; return 'Translation in progress. Are you sure you want to leave?'; } }); function stopTranslation() { if (isTranslating && currentTaskId) { // Cancel the polling console.log(currentTaskId) clearInterval(pollInterval); // Send a request to the server to stop the task const xhr = new XMLHttpRequest(); xhr.onload = function() { if (xhr.status == 200) { const response = JSON.parse(xhr.responseText); if (response.status === 'stopped') { console.log('Translation stopped'); isTranslating = false; currentTaskId = null; // Update UI to … -
Django-ninja Webhook Server - Signature Error/Bad Request
I am working on a Django application where I have to develop a webhook server using Django-ninja. The webhook app receives a new-order notification as described here: https://developer.wolt.com/docs/marketplace-integrations/restaurant-advanced#webhook-server My code is below: @api.post("/v1/wolt-new-order") def wolt_new_order(request: HttpRequest): received_signature = request.headers.get('wolt-signature') if not received_signature: print("Missing signature") return HttpResponse('Missing signature', status=400) payload = request.body expected_signature = hmac.new( CLIENT_SECRET.encode(), payload, hashlib.sha256 ).hexdigest() print(f"Received: {received_signature}") print(f"Expected: {expected_signature}") if not hmac.compare_digest(received_signature, expected_signature): return HttpResponse('Invalid signature', status=400) print(payload) return HttpResponse('Webhook received', status=200) For some reason this always returns 'error code 400, bad request syntax' and the two signatures are always different. I am importing the CLIENT_SECRET correctly and I have all the necessary libraries properly installed. Funnily enough, when I do the same on a test Flask app, I receive the webhook notification correctly without issues. My webhook server is behind ngrok. Any ideas what I am doing wrong here?
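One thing worth ruling out (a debugging sketch, not a confirmed cause) is the digest encoding: some providers send the HMAC base64-encoded rather than hex, so comparing a hexdigest against a base64 signature always fails. Printing both forms, computed over the untouched request.body, makes any mismatch obvious:

```python
import base64
import hashlib
import hmac

def debug_signatures(raw_body: bytes, secret: str, received: str) -> None:
    """Print hex and base64 HMAC-SHA256 digests of the raw payload for comparison."""
    digest = hmac.new(secret.encode(), raw_body, hashlib.sha256)
    print("received:", received)
    print("hex:     ", digest.hexdigest())
    print("base64:  ", base64.b64encode(digest.digest()).decode())
```

Separately, the wording "bad request syntax" is how Python's built-in HTTP server (used by runserver) reports a malformed request line, which would mean the request is rejected before this view ever runs; that could also explain why the same handler behaves differently on the Flask test setup.
-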
Most suitable psycopg3 installation method for a production Django website with PostgreSQL
I'm running a production website using Django with a PostgreSQL database. The official psycopg3 documentation (https://www.psycopg.org/psycopg3/docs/basic/install.html) describes three different installation methods: binary installation, source installation, and C extension-only installation. Which of these installation methods would be most performant and suitable for a production environment? Are there any specific considerations or trade-offs I should be aware of when choosing between these options for a Django project in production? I'm particularly interested in: performance implications, stability and reliability, ease of maintenance and updates, and any potential compatibility issues with Django. -
How to set value for serializers field in DRF
I have a webpage on which I show some reports of trades between sellers and customers. For this purpose, I need to create an API to get all of the trades from the database, extract the necessary data and serialize it for the webpage, so I do not plan to create any model and will just return the data in JSON format. First I created my serializers like this: from rest_framework import serializers from django.db.models import Sum, Count from account.models import User class IncomingTradesSerializer(serializers.Serializer): all_count = serializers.IntegerField() all_earnings = serializers.IntegerField() successful_count = serializers.IntegerField() successful_earnings = serializers.IntegerField() def __init__(self, *args, **kwargs): self.trades = kwargs.pop('trades', None) super().__init__(*args, **kwargs) def get_all_count(self, obj): return self.trades.count() def get_all_earnings(self, obj): return sum(trade.trade_price for trade in self.trades) def get_successful_count(self, obj): return self.trades.exclude(failure_reason=None).count() def get_successful_earnings(self, obj): return sum(trade.trade_price for trade in self.trades.exclude(failure_reason=None)) class TradesDistributionSerializer(serializers.Serializer): sellers = serializers.DictField() def __init__(self, *args, **kwargs): self.trades = kwargs.pop('trades', None) super().__init__(*args, **kwargs) def get_sellers(self, obj): sellers = {} for user in User.objects.all(): distributed_trades = self.trades.filter(creator=user) sellers[user.username] = sum( trade.trade_price for trade in distributed_trades) return sellers and then my APIView looks like this: from rest_framework.views import APIView from rest_framework.response import Response from trade.models import Trade from report.serializers import IncomingTradesSerializer, TradesDistributionSerializer class IndicatorView(APIView): def get(self, …
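In the serializers shown, the get_all_count/get_sellers methods are never called: plain IntegerField/DictField declarations do not look for get_<field> methods, that behaviour belongs to SerializerMethodField. A minimal sketch of that pattern, keeping the field names from the question and assuming the rest of the setup stays the same:

```python
from django.db.models import Sum
from rest_framework import serializers

class IncomingTradesSerializer(serializers.Serializer):
    # SerializerMethodField makes DRF call get_<field_name>(self, obj) on serialization
    all_count = serializers.SerializerMethodField()
    all_earnings = serializers.SerializerMethodField()

    def __init__(self, *args, **kwargs):
        self.trades = kwargs.pop('trades', None)
        super().__init__(*args, **kwargs)

    def get_all_count(self, obj):
        return self.trades.count()

    def get_all_earnings(self, obj):
        # aggregate in the database instead of summing in Python
        return self.trades.aggregate(total=Sum('trade_price'))['total'] or 0
```

Passing the queryset through the serializer context (self.context) rather than a custom __init__ kwarg is the more conventional route, but either works with SerializerMethodField.
-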
Django 4.2.15 on Cloud Run - Severe Slowdown with uWSGI and High `domLoading` Time, But Only in One Project
I'm encountering severe performance issues with my Django 4.2.15 application when running on Google Cloud Run. However, the problem seems to be isolated to this specific project, as other projects on Cloud Run do not exhibit the same behavior. Here are the details: Django version: 4.2.15 Environment: Google Cloud Run (1 Core, 2 GB Memory) Web Server: uWSGI 2.0.26 (Nginx is not being used) Database: MySQL (queries are not the bottleneck) Issue: The site is significantly slower on Cloud Run compared to my local environment. Observation: The Django Debug Toolbar shows that about 90% of the total time is spent during the domLoading phase, which is disproportionately long. Caching: The site performs well when serving cached views (i.e., when database access is avoided). Local Environment: The application runs very quickly locally without any noticeable delays. Cloud Run Details: The slowdown occurs even when no other users are accessing the service. This is not due to a cold start; the service is already running. Other projects on Cloud Run do not experience this issue. Additionally, I created a simple static HTML page with the following content: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Hello world</title> </head> <body> Hello world! </body> </html> … -
Django user login error, all requests from the same user
I'm trying to log users in using the default login system; however, when a user logs in, it rewrites all requests from other users as if they came from the user who just logged in to the platform. class CustomLoginView(LoginView): def __init__(self, **kwargs: Any) -> None: super().__init__(**kwargs) I wrote some tests to reproduce the error: @patch('django.middleware.csrf.CsrfViewMiddleware.process_view') @patch('djoser.views.settings.PERMISSIONS.user_delete') def test_login_user_with_other_session_opened_with_3_users(self, mock_user_delete, mock_process_view): mock_process_view.return_value = None mock_user_delete.return_value = True logger.debug("user created") first_user = self.client.post(reverse('rest_framework:login'),data={"username":self.first_user.email, "password":"1am_th30nE"}) second_user = self.client.post(reverse('rest_framework:login'),data={"username":self.second_user.email, "password":"1am_th30nE"}) third_user = self.client.post(reverse('rest_framework:login'),data={"username":self.third_user.email, "password":"1am_th30nE"}) first_user = first_user.client.get(reverse('v1:auth-me')) second_user = second_user.client.get(reverse('v1:auth-me')) third_user = third_user.client.get(reverse('v1:auth-me')) self.assertEqual(first_user.wsgi_request.user.email, self.first_user.email) self.assertEqual(second_user.wsgi_request.user.email, self.second_user.email) self.assertEqual(third_user.wsgi_request.user.email, self.third_user.email) This fails in every case because the email on the WSGI request is always the second user's email. I have no idea why this is happening.
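In the test shown, all three logins go through the same self.client, and response.client is that same object, so there is only one session cookie being overwritten on every login, which would produce exactly this symptom. A hedged sketch with one test client (and thus one session) per simulated user, reusing the URL names and password from the question:

```python
from django.test import Client
from django.urls import reverse

def test_three_users_keep_their_own_sessions(self):
    # one client, and therefore one session cookie, per simulated user
    clients = {
        self.first_user: Client(),
        self.second_user: Client(),
        self.third_user: Client(),
    }
    for user, client in clients.items():
        client.post(reverse('rest_framework:login'),
                    data={"username": user.email, "password": "1am_th30nE"})

    for user, client in clients.items():
        response = client.get(reverse('v1:auth-me'))
        self.assertEqual(response.wsgi_request.user.email, user.email)
```

If separate clients pass, the production setup is fine: real users have separate browsers and separate session cookies, so one login cannot rewrite another user's requests.
-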
Django store variable between user requests
Short version: how does one store a variable in memory in Django and make it shareable between different users? Long story: I've written an API using Django and django-ninja. The API uses a third-party library called Pyrogram that connects to a Telegram account. I use it like this: async with Client(session_name, config['api_id'], config['api_hash']) as app: DO something So I want to store this app variable in memory and share it between all users. (I need this because otherwise Pyrogram creates another session on each user request, and the number of Pyrogram sessions is limited.)
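A process-global singleton is the usual in-memory answer, with the caveat that every worker process gets its own copy, so this only works cleanly with a single async worker. A minimal sketch under that assumption; the module and function names are hypothetical:

```python
# telegram_client.py -- one shared Pyrogram client per worker process (assumes a single worker)
import asyncio

from pyrogram import Client

_app = None
_lock = asyncio.Lock()

async def get_app(session_name: str, api_id: int, api_hash: str) -> Client:
    """Create the client on first use, then reuse the same connected instance."""
    global _app
    async with _lock:
        if _app is None:
            _app = Client(session_name, api_id, api_hash)
            await _app.start()   # stay connected instead of using "async with" per request
        return _app
```

With multiple workers (or a sync WSGI server), sharing a live connection in memory is not possible; a small dedicated service or task worker that owns the single Telegram session is the safer design there.
-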
Django tutorial - no tables created?
In the Django tutorial "Writing your first Django app", part 2 starts off with Database Setup. I'm supposed to run a command to make tables, then it says "if I'm interested", I can use the sqlite3 terminal to see those tables. Unfortunately, I can't see anything. It has me run 'py manage.py migrate', which it tells me will create tables for the default applications (admin, auth, contenttypes, sessions, messages, staticfiles). When I use the sqlite3 terminal, I'm able to see a database (.databases returns {main: "" r/w}), but no tables (.tables returns nothing, it just goes to the next terminal entry line). I'm doing .cd [to the directory where the db.sqlite3 file is], then running .tables on the next line. Am I doing something wrong with either seeing the tables in the sqlite3 terminal or creating them with the django migrate command? Any help in making sense of this mismatch between what I see and what the tutorial tells me I should see would be much appreciated.
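The .databases output showing main: "" means the shell currently has an empty, file-less database open: .cd only changes the working directory, it does not attach db.sqlite3 (that needs starting the shell as sqlite3 db.sqlite3, or running .open db.sqlite3). A quick way to confirm from Python that migrate really did create the tables, assuming the default db.sqlite3 location next to manage.py:

```python
import sqlite3

# connect to the same file Django's default settings point at
con = sqlite3.connect("db.sqlite3")
tables = con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
).fetchall()
print(tables)   # expect django_migrations, auth_user, django_session, ...
con.close()
```
-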
How to Customize and Install Tutor OpenEdX Micro-Frontend (MFE) with Example for frontend-app-authn?
I'm working on a Tutor OpenEdX instance and want to customize and install a Micro-Frontend (MFE). Specifically, I'm focusing on the frontend-app-authn MFE. I'm relatively new to working with OpenEdX MFEs and need some guidance on the best practices for customizing and installing an MFE within Tutor. Installed Tutor: I've successfully set up Tutor and the OpenEdX platform. Cloned the MFE repository: I've cloned the frontend-app-authn repository from GitHub. Basic Customization: I made some basic customizations in the cloned MFE repository (e.g., modifying CSS and text). Mounting the MFE: How do I correctly mount the customized frontend-app-authn MFE into my Tutor OpenEdX instance? Is there a specific configuration file or command that I need to modify or run? Building the MFE: What are the steps to build the customized MFE so that it can be deployed? Do I need to use a specific build command or tool? -
Sentry with Django integration throws KeyError: 'request'
I am going nuts with this issue, which I believe is coming from sentry-sdk for Python, probably in combination with some other dependencies. I have a project on Django 4.2 with sentry-sdk 2.13.0 which throws a KeyError: 'request' each time something is triggered on Sentry, meaning that if an error is reported I get several more errors reported about KeyError: 'request' (it is not just one per report). What confuses me is that the original error is shown correctly on Sentry, so I cannot say that sentry-sdk is failing to report the issue. Moreover, this also happens whenever I manually trigger an info/warning report from code to Sentry. To note, this started happening after upgrading the project dependencies, especially Django to 4.2. I also thought it could have been a context_processor issue, but the context_processor for request is in the settings (see code below). Any help or suggestion is much appreciated. This is my Sentry settings code: import sentry_sdk from sentry_sdk.integrations.django import DjangoIntegration sentry_sdk.init( dsn="XXXX", integrations=[DjangoIntegration()], server_name='XXXX', send_default_pii=True ) And this is the stacktrace for the error: KeyError 'request' django/template/context.py in __getitem__ at line 83 cms/templatetags/cms_tags.py in _get_empty_context at line 636 cms/templatetags/cms_tags.py in get_context at line 829 cms/templatetags/cms_tags.py in …
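Judging by the stack trace, the KeyError is raised inside django-cms template tags rendered without a 'request' in the template context, not inside sentry-sdk itself. Until the underlying incompatibility is pinned down, one hedged way to stop the noise is to drop those secondary events with a before_send hook (before_send is a standard sentry-sdk option; the filter condition here is an assumption about which events you want to suppress):

```python
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

def drop_request_keyerrors(event, hint):
    """Discard the follow-up KeyError: 'request' events, keep everything else."""
    exc_info = hint.get("exc_info")
    if exc_info:
        exc = exc_info[1]
        if isinstance(exc, KeyError) and exc.args == ("request",):
            return None   # returning None drops the event
    return event

sentry_sdk.init(
    dsn="XXXX",
    integrations=[DjangoIntegration()],
    server_name="XXXX",
    send_default_pii=True,
    before_send=drop_request_keyerrors,
)
```

This only hides the symptom; checking for a django-cms release compatible with Django 4.2 is the actual fix to chase.
-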
Django using update() on bool column as an application operation lock
My Django app has an operation (syncing data from an external system) that I want to prevent from occurring twice simultaneously. For context, the data sync can occur due to a periodically scheduled task or because the user manually/explicitly requests a sync. If two sync jobs run concurrently, the database may end up in an undesired state (and even trigger undesired effects in the application as a result). In the event that multiple sync attempts occur concurrently, I would like one to succeed and the others to fail. The solution I have in mind is to have a boolean column called is_syncing on my Tenant model, since the data and logic for syncing pertain to the specific tenant. The idea is that before attempting an update, I would call Tenant.objects.filter(id=tenant_id, is_syncing=False).update(is_syncing=True), which returns the number of rows affected. It should return 1 if is_syncing was previously False, meaning the sync operation may proceed. Or it may return 0, meaning a sync has already begun and the application should abort/fail the sync attempt. My questions are: Will this approach avoid race conditions and guarantee no concurrent syncs? Is there a better way of going about this? I am using Django's …
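The filter(...).update(...) call is a single UPDATE, so the database row-locks the tenant row and only one caller can flip is_syncing from False to True; the main practical risk is a crash that leaves the flag stuck at True. A hedged sketch of the acquire/release shape, with model and field names taken from the question and the import path assumed:

```python
from contextlib import contextmanager

# from myapp.models import Tenant   # app-specific import, adjust to your project

@contextmanager
def tenant_sync_lock(tenant_id):
    """Yield True if this caller won the lock; always release on exit."""
    acquired = Tenant.objects.filter(
        id=tenant_id, is_syncing=False
    ).update(is_syncing=True) == 1
    try:
        yield acquired
    finally:
        if acquired:
            Tenant.objects.filter(id=tenant_id).update(is_syncing=False)

# usage
with tenant_sync_lock(tenant_id) as got_lock:
    if not got_lock:
        raise RuntimeError("sync already in progress")
    run_sync(tenant_id)   # hypothetical sync entry point
```

Storing a sync_started_at timestamp alongside the flag (so stale locks can be expired) guards against the stuck-flag case; database-level alternatives include select_for_update or PostgreSQL advisory locks.
-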
Django Obtaining access path error for static resources
I used Django to configure a default avatar path under static/images, but the system always looks for the default avatar file under media/images. models.py class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) nickname = models.CharField(max_length=100) avatar = models.ImageField(upload_to='avatars/',null=True, blank=True,default='images/默认头像.jpeg') settings.py MEDIA_URL = '/media/' MEDIA_ROOT = os.path.join(BASE_DIR,'media') STATIC_URL = '/static/' STATICFILES_DIRS = [ os.path.join(BASE_DIR, 'static'), ] urls.py urlpatterns = [ path('admin/', admin.site.urls), path('',include('myapp.urls')), ] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT) profile.html <img src="{% if request.user.profile.avatar %}{{ request.user.profile.avatar.url }}{% else %}{% static "images/默认头像.jpeg" %}{% endif %}" alt="头像" class="avatar-img"> media path: /Users/cai.wang/PycharmProjects/myweb/media default image path: /Users/cai.wang/PycharmProjects/myweb/static/images/默认头像.jpeg Log information: Not Found: /media/images/默认头像.jpeg "GET /media/images/%E9%BB%98%E8%AE%A4%E5%A4%B4%E5%83%8F.jpeg HTTP/1.1" 404 4224 I expected it to fetch the image from static/images rather than media/images.
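An ImageField default is always resolved against MEDIA_ROOT/MEDIA_URL, never the static dirs, and because a default is set, {% if request.user.profile.avatar %} is truthy, so the {% static %} branch never runs. A hedged sketch that keeps the default avatar purely as a static asset (filename kept from the question):

```python
# models.py -- no default on the field, so the template's {% static %} fallback
# is actually reached for users without an uploaded avatar
from django.contrib.auth.models import User
from django.db import models

class Profile(models.Model):
    user = models.OneToOneField(User, on_delete=models.CASCADE)
    nickname = models.CharField(max_length=100)
    avatar = models.ImageField(upload_to='avatars/', null=True, blank=True)
```

Alternatively, keep the model-level default and copy 默认头像.jpeg into MEDIA_ROOT/images/ so that the default resolves where Django expects it.
-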
How to keep data field/attribute/column names consistent throughout a project?
In any Python project (or any other language) we need to use the same field names in various places throughout the processing pipeline. This requirement is ever-present and often leads to errors, debugging time and code breaking at runtime. So how can we keep field/attribute/column names consistent throughout the project (to avoid typos, mismatched names, etc.)? Let's understand the problem with an example. Suppose my project uses data scraping for data collection and then stores and serves the data through back-end infrastructure (a web application or API). Now, if I use Scrapy for scraping, I have to create an Item() with the required data fields (suppose: book_title, author, pages and price); later the same field names are used in the spiders, and the scraped data may be saved as JSON. The JSON file may be dumped directly into a database (to populate data for an application), so the table(s) need to be created with the same names to avoid errors. Similarly, if Django is used as the backend, then the model and serializer will also carry these field names, so again we repeat the same names. If we want to do some data operation, suppose with …
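One common answer is to declare the names exactly once in a small shared constants module and reference it from the Scrapy item, the Django model, the serializer and any ETL code. A minimal sketch; the module name is hypothetical and the fields are just the example from the question:

```python
# fields.py -- single source of truth for field names
BOOK_TITLE = "book_title"
AUTHOR = "author"
PAGES = "pages"
PRICE = "price"

ALL_FIELDS = (BOOK_TITLE, AUTHOR, PAGES, PRICE)

# elsewhere, e.g. in a spider pipeline or an export routine:
def to_row(item: dict) -> tuple:
    """Pull values in a fixed, centrally defined order."""
    return tuple(item[name] for name in ALL_FIELDS)
```

Typos then surface as ImportError/NameError at import time instead of silent mismatches at runtime; dataclasses, Enums, or a single shared schema definition (for example one pydantic model) push the same idea further.
-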
Django Viewflow - Passing field values via urls upon process start
Is it possible to pass a value to a process via the start URL/path? I have a process model with a note field. I want to start a new process flow and pass the note in the URL, e.g. http://server.com/my_process/start/?note=mynote -
Getting version conflicts when trying to add dependencies in my django-react project
I am trying to add some more dependencies, like react-pdf, to my project, which is a Django project that also uses React components. I am using webpack to bundle my React code for Django via npm build and webpack start commands, which produce a build file that my Django project accepts. As far as I know, the React packages and dependencies are listed in a package.json file, which records what is required or installed in the system. If I run "npm install", it reads package.json and then package-lock.json to install the listed packages and dependencies. Now I have removed the node_modules folder (which contains all the installed dependencies) and the package-lock.json file too, keeping only package.json; if I try to install the dependencies now, I get a number of version conflicts among the packages/libraries listed in package.json. The main issue is that the project is already running without any errors in deployment, and now when … -
How can we track the location and movement of animals through Django, using GPS trackers attached to the animals?
I am thinking of tracking the animals on my farm by attaching GPS chips to them, and of using Python/Django to view each animal's location on Google Maps or Leaflet.js. I haven't tried anything yet and am looking for suggestions; I am totally new to this field.
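At its simplest this is a model that stores timestamped position fixes per animal, which the trackers post to an endpoint and a map page reads back. A hedged starting-point sketch; all names are hypothetical:

```python
# models.py -- minimal schema for storing GPS fixes per animal
from django.db import models

class Animal(models.Model):
    tag_id = models.CharField(max_length=50, unique=True)   # ID reported by the tracker
    name = models.CharField(max_length=100, blank=True)

class LocationPing(models.Model):
    animal = models.ForeignKey(Animal, on_delete=models.CASCADE, related_name="pings")
    latitude = models.DecimalField(max_digits=9, decimal_places=6)
    longitude = models.DecimalField(max_digits=9, decimal_places=6)
    recorded_at = models.DateTimeField(db_index=True)
```

A small view can then return the latest ping per animal as JSON for Leaflet or Google Maps to plot; GeoDjango with PostGIS becomes worthwhile once distance or geofencing queries are needed.
-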
Django connecting localhost PgSQL but not to remote DB (AWS RDS)
I have a Django project which uses PostgreSQL as the database. With the local DB, everything works fine, but when I try to connect to a PostgreSQL instance on AWS RDS, it does not connect. The credentials are correct and I'm able to connect to the DB using the command line, DBeaver (a DB tool) and TablePlus (another DB tool). Local DB config: DATABASES = { "default": { "ENGINE": 'django.db.backends.postgresql', "NAME": 'my_db_name', "USER": 'my_user', "PASSWORD": 'my_password', "HOST": 'localhost', "PORT": '5432', } } Remote DB config: DATABASES = { "default": { "ENGINE": 'django.db.backends.postgresql', "NAME": 'my_db_name', "USER": 'my_user', "PASSWORD": 'my_password', "HOST": 'xxxxx.xxxxxx.us-east-2.rds.amazonaws.com', "PORT": '5432', } } Am I missing anything here?
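Since the same credentials work from psql and the GUI clients, the first things worth surfacing are where the connection stalls and whether RDS is enforcing SSL. A hedged tweak to the settings (the option values are assumptions) makes failures show up quickly with the real error instead of hanging:

```python
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "my_db_name",
        "USER": "my_user",
        "PASSWORD": "my_password",
        "HOST": "xxxxx.xxxxxx.us-east-2.rds.amazonaws.com",
        "PORT": "5432",
        "OPTIONS": {
            "sslmode": "require",     # RDS often enforces SSL (rds.force_ssl)
            "connect_timeout": 5,     # fail fast instead of hanging silently
        },
    }
}
```

If it still times out, the usual suspects are the RDS security group not allowing inbound 5432 from the host Django runs on, or the instance not being reachable from that network (public accessibility / VPC routing).
-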
How to delete a Django model file without restarting the server and trigger migrations?
I'm working on a Django application where I need to programmatically delete a model file (e.g., my_app/models/my_model.py) and then run migrations, all without restarting the server. However, after deleting the file and running makemigrations, Django doesn't recognize any changes and doesn't generate a migration to remove the model. I've tried reloading the modules using importlib.reload(), but that hasn't resolved the issue. The system only detects the changes after I restart the server. Does Django or Python keep references to the model elsewhere, preventing the changes from being registered immediately? Is there a way to resolve this without having to restart the server? file_path = 'my_app/models/my_model.py' if os.path.exists(file_path): os.remove(file_path) #Update __init__.py init_file_path = 'my_app/models/__init__.py' with open(init_file_path, 'r') as file: lines = file.readlines() with open(init_file_path, 'w') as file: for line in lines: if 'my_model' not in line: file.write(line) module_name = 'my_app.models' if module_name in sys.modules: del sys.modules[module_name] module = importlib.import_module(module_name) importlib.reload(module) call_command('makemigrations', 'concrete_models') call_command('migrate', 'concrete_models') -
Django Celery is not updating database and S3
I am building a Django project that has multiple apps. For one of the apps, which processes the latest uploaded data, I am going to use Celery, since it takes some time (around 20-30 seconds) to retrieve and fetch data from AWS S3, process the data, create a new model instance in the db (PostgreSQL) and upload the result back to the processed folder on S3. Currently it works well on localhost without Celery, but Heroku's request timeout is 30 seconds, hence Celery. (For now I am testing it all on localhost, no Heroku involved.) My current code is as follows and works correctly without error. In tasks.py (in the processes app folder): from django.contrib.auth.models import User from functions.income_processor import IncomeProcessor from functions.expense_processor import ExpenseProcessor from utils.s3_utils import get_static_data, get_latest_income_data, get_latest_expense_data, save_processed_data def process_income_task(user_id): user = User.objects.get(id=user_id) try: static_data = get_static_data() income_data = get_latest_income_data(user) if income_data is None: raise ValueError("No income data available") process = IncomeProcessor(static_data, income_data) process.process() final_df = process.get_final_df() save_processed_data(user, final_df, 'INCOME') except Exception as e: print(f"Task failed: {str(e)}") raise e In views.py: @login_required def initiate_income_process(request): process_income_task(request.user.id) return render(request, 'processes/processing_started.html', {'process_type': 'income'}) def display_income(request): user = request.user latest_processed_data = ProcessedData.objects.filter(user=user, data_type='INCOME').order_by('-upload_date').first() print(f"Latest processed data: {latest_processed_data.filename …
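As written, process_income_task is a plain function and the view calls it synchronously, so Celery never runs it, and any broker or worker misconfiguration goes unnoticed. A hedged sketch of the Celery wiring, assuming a standard Celery app is already configured for the project:

```python
# tasks.py
from celery import shared_task
from django.contrib.auth.models import User

@shared_task
def process_income_task(user_id):
    user = User.objects.get(id=user_id)
    ...  # existing processing logic unchanged

# views.py
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

from .tasks import process_income_task

@login_required
def initiate_income_process(request):
    # .delay() queues the job for a worker instead of running it in the request
    process_income_task.delay(request.user.id)
    return render(request, 'processes/processing_started.html', {'process_type': 'income'})
```

A worker also has to be running (celery -A <project> worker) against the same broker; if the task executes but the DB and S3 still do not update, the worker's own log is where the real exception will appear, since print/raise inside a task never reaches the web process.
-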
Django model returns count() 0, but raises IntegrityError on single entry bulk_create/save in testcase
I have a testcase for re-calculation of values in my database. The flow is: initial calculation (entries added to multiple models/tables), reset data (delete related calculated data), re-calculate (entries added to multiple models/tables). So my testcase looks something like this: assert DataModel1.objects.count() == 0 assert DataModel2.objects.count() == 0 plan = prepare_plan() calculation_manager = CalculationManager() # initial calculation calculation_manager.schedule(plan) # confirm data populated self.assertTrue(DataModel1.objects.count()) self.assertTrue(DataModel2.objects.count()) perform_reset(plan) # confirm data deleted assert DataModel1.objects.count() == 0 assert DataModel2.objects.count() == 0 # re-calculate calculation_manager.schedule(plan) On the re-calculation calculation_manager.schedule(plan) attempt, an IntegrityError - duplicate key value violates constraint - is raised for DataModel2 on bulk_create, despite deleting and confirming DataModel2.objects.count() == 0 and confirming that bulk_create is passed only a single entry. (I also tried replacing bulk_create with save and the result was the same.) Inside the perform_reset(plan) and calculation_manager.schedule(plan) functions/methods are atomic() blocks where results are committed to the database or items are deleted. -- I've tried switching between TestCase/TransactionTestCase with no difference in results. I have determined that performing a manual SQL table truncation after perform_reset(plan) lets the testcase proceed as expected, but this shouldn't be needed. Why, when performing the delete in the testcase and Django returning a 0 count, … -
After switching from a public IP to an Elastic IP in AWS, my API page is no longer accessible, and Nginx is stuck on the "Welcome to nginx!" page
I initially set up an API on an Ubuntu virtual environment using Nginx, Gunicorn, and Supervisor, and it was working fine with the original public IP address. However, I decided to attach an Elastic IP to the instance. After attaching the Elastic IP, the API page became inaccessible, throwing 404 Not Found. Steps taken: updated all configurations that referenced the old public IP address to use the new Elastic IP, including gunicorn.conf and settings.py in Django, and restarted Nginx to apply the changes, etc. Configuration: AWS security groups [Port range, Protocol, Source]: [22, TCP, 0.0.0.0/0], [80, TCP, 0.0.0.0/0], [80, TCP, ::/0] Question: What could be causing Nginx to display the "Welcome to nginx!" page instead of serving my API? -
Django admin option to log in as normal user
In my Django project I need to give an admin the option to log in as a different user from the admin site. I am trying to achieve a specific behaviour: the admin can log in as a different user in a new tab or window while staying logged in as admin on the original browser tab. Is this possible? Can someone advise me how to achieve such behaviour? Any help would be appreciated. 🙏 def impersonate_link(self, obj): impersonate_url = reverse('impersonate_user', args=[obj.id]) return format_html( '<a href="{}" onclick="window.open(\'{}\', \'_blank\', \'width=800,height=600\'); return false;">Log in as</a>', impersonate_url, impersonate_url ) I created this link in admin.py to show next to each user record. The link leads to the impersonate_user view in views.py. In the view the admin simply logs in as the desired user, but is also logged out of the admin account. @staff_member_required def impersonate_user(request, user_id): user = get_object_or_404(CustomUser, id=user_id) login(request, user) return redirect('home') I get why the admin is being logged out of their account, but I really need help achieving the behaviour mentioned above.
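Because every tab of the same browser shares the session cookie, calling login() anywhere replaces the admin's session everywhere: you cannot be two users at once in one browser, only in a separate browser profile or incognito window. The usual pattern (what packages such as django-hijack implement) is to remember the admin's id in the session when switching and provide a "release" view to switch back. A rough sketch of that session-swap idea; the view names and session key are assumptions:

```python
from django.contrib.admin.views.decorators import staff_member_required
from django.contrib.auth import login
from django.shortcuts import get_object_or_404, redirect

# from accounts.models import CustomUser   # your user model, as in the question

@staff_member_required
def impersonate_user(request, user_id):
    admin_id = request.user.id                     # remember who the admin was
    user = get_object_or_404(CustomUser, id=user_id)
    login(request, user, backend='django.contrib.auth.backends.ModelBackend')
    # set AFTER login(), because logging in a different user flushes the old session
    request.session['impersonator_id'] = admin_id
    return redirect('home')

def release_impersonation(request):
    admin_id = request.session.pop('impersonator_id', None)
    if admin_id:
        admin = get_object_or_404(CustomUser, id=admin_id)
        login(request, admin, backend='django.contrib.auth.backends.ModelBackend')
    return redirect('admin:index')
```

django-hijack packages this up with auditing and a release banner if you would rather not maintain it by hand; true side-by-side sessions in one browser are not achievable with cookie-based sessions.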