Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
Data not saved to database
I don't know what's going on here: whenever I try to create a buzz with the status set to Draft (this is also the default status set on the model), Django returns a 302 response and does not save it to the database. However, when I change the status to Published it saves normally to the database. Here's the code for the view: def buzz_create(request): form = BuzzCreateForm() if request.method == 'POST': form = BuzzCreateForm(data=request.POST) if form.is_valid: buzz = form.save(commit=False) buzz.author = request.user buzz.save() return redirect(to=reverse('buzz:buzz_list')) return render( request=request, template_name='buzz/create.html', context={ 'form': form } ) Here's the code for the model: class BuzzPublishedManager(models.Manager): def get_queryset(self): return ( super().get_queryset().filter(status=Buzz.Status.PUBLISHED) ) class Buzz(models.Model): class Status(models.TextChoices): PUBLISHED = 'PBL', 'Published' DRAFT = 'DFT', 'Draft' title = models.CharField(max_length=250) body = models.TextField() slug = models.SlugField(max_length=250, unique_for_date='publish') publish = models.DateTimeField(default=timezone.now) created = models.DateTimeField(auto_now_add=True) updated = models.DateTimeField(auto_now=True) author = models.ForeignKey( to=settings.AUTH_USER_MODEL, on_delete=models.CASCADE, related_name='buzzes' ) status = models.CharField( max_length=3, choices=Status, default=Status.DRAFT ) published = BuzzPublishedManager() objects = models.Manager() class Meta: verbose_name_plural = 'Buzzes' ordering = ['-publish'] indexes = [ models.Index(fields=['-publish']) ] def __str__(self): return self.title The objects manager was not in the model before; I tried adding it in case the data was being saved but not queried … -
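Two things worth checking in the question above. First, form.is_valid is referenced without parentheses; a bound method object is always truthy, so the "valid" branch runs whether or not the data validates. Second, the first manager declared on a model becomes its default manager, so before objects was added the only manager was BuzzPublishedManager, and every default query silently filtered drafts out; the drafts may well have been saved all along. The truthiness pitfall can be shown without Django:

```python
class FakeForm:
    """Stand-in for a Django form (hypothetical; illustration only)."""

    def is_valid(self):
        return False  # pretend validation failed


form = FakeForm()

# Missing parentheses: this evaluates the bound method object itself,
# which is always truthy, so an `if form.is_valid:` branch always runs.
assert bool(form.is_valid) is True

# Calling the method actually runs the validation logic.
assert form.is_valid() is False
```

With the parentheses added, invalid submissions re-render the template with form.errors populated instead of silently redirecting.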
Django Allauth's Google Login Redirect and Page Design
Currently, on the login page, I have a button: <div class="d-grid gap-2"> <a href="{% provider_login_url 'google' %}" class="btn btn-danger"> <i class="fab fa-google"></i> Sign in with Google </a> </div> This redirects to accounts/google/login/, and that page allows for redirection to Google authentication. I have two problems: I don't know if these two steps are necessary and I don't see the value of having the extra step accounts/google/login/. I don't know how to replace the standard layout of the accounts/google/login/ page (in case it is really needed). -
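On the first question: recent django-allauth versions show the intermediate accounts/google/login/ confirmation page deliberately, as a guard against login CSRF via GET requests. If memory serves, there is a setting to skip it so the button goes straight to Google (verify the name against the docs for your installed version):

```python
# settings.py sketch; setting name assumed from django-allauth docs,
# verify it exists in your installed version before relying on it.
SOCIALACCOUNT_LOGIN_ON_GET = True  # skip the intermediate confirmation page
```

On the second question: the page's layout can be replaced by shadowing the corresponding socialaccount template (e.g. a socialaccount/login.html in one of your own template directories, which Django resolves before the package's copy).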
Access request session data of DetailView in CreateView in django
I am writing a library management system in Django. There are two views that I am struggling with. The BookDetailsView lists the details of a Book such as title, price, etc. class BookDetailsView(LoginRequiredMixin, DetailView): model = Book template_name = 'book_detail.html' def get(self, request, *args, **kwargs): response = super().get(request, *args, **kwargs) request.session['book_pk'] = kwargs['pk'] return response # used to mark book as read or unread def post(self, request, *args, **kwargs): if 'is_read' in request.POST: book = Book.objects.get(pk=kwargs['pk']) book.is_read = True book.save() return HttpResponseRedirect(self.request.path_info) In the BookBorrowView, I display a form where the reader can borrow a book. Two fields are preset (borrowers and book), and I don't want the user to be able to change them. At the moment, the user can select among many options. class BookBorrowView(LoginRequiredMixin, CreateView): model = BookBorrowTransaction template_name = 'book_borrow.html' fields = ['book', 'borrowers', 'date_borrowed', 'to_return', ] success_url = reverse_lazy('home') def get_initial(self): initial = super(BookBorrowView, self).get_initial() initial['borrowers'] = get_object_or_404(CustomUser, email=self.request.user.email) initial['book'] = get_object_or_404(Book, title=Book.objects.get(pk=self.request.session['book_pk']).title) # need the book id here print(self.request.GET) print(self.request.POST) print(self.request.session['book_pk']) return initial The following is a screenshot of the form displayed by the BookBorrowView. I have two questions: I am passing the primary key for the book through request.session … -
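For the "preset but not changeable" part of the question above, one common approach is to mark the two fields as disabled in get_form: disabled form fields render read-only and ignore POSTed values, keeping their initial ones, so the user cannot tamper with them. A sketch, untested against these models and reusing the names from the question:

```python
class BookBorrowView(LoginRequiredMixin, CreateView):
    model = BookBorrowTransaction
    template_name = 'book_borrow.html'
    fields = ['book', 'borrowers', 'date_borrowed', 'to_return']
    success_url = reverse_lazy('home')

    def get_form(self, form_class=None):
        form = super().get_form(form_class)
        # Disabled fields keep their initial value even if the POST
        # data is tampered with, and render as non-editable widgets.
        form.fields['book'].disabled = True
        form.fields['borrowers'].disabled = True
        return form

    def get_initial(self):
        initial = super().get_initial()
        initial['borrowers'] = self.request.user
        initial['book'] = get_object_or_404(Book, pk=self.request.session['book_pk'])
        return initial
```

Fetching the book directly by pk also avoids the round trip through its title, which would break if two books share a title.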
Export command not found in working django import-export app
I'm trying to reproduce the export command as shown in the import-export docs: python manage.py export CSV auth.User Yet all I get is: Unknown command: 'export'. Type 'manage.py help' for usage. Aside from the management command, the import-export API works fine. -
Django: Unable to log in to an account
I have created a custom user model in my Django project where the password is saved using make_password from password_strength, but when I try to log in using check_password it says invalid username or password. @csrf_exempt def login_attempt(request): if request.method == 'POST': try: data = json.loads(request.body) email = data.get('email') password = data.get('password') try: user_obj = user.objects.get(email=email) except user.DoesNotExist: return JsonResponse({'success': False, 'message': "Email doesnot exists"}, status=401) if check_password(password, user_obj.password): login(request, user_obj) return JsonResponse({'success': True, 'message': "Login successful"}, status=200) else: return JsonResponse({'success': False, 'message': "Invalid email or password"}, status=401) except Exception as e: return JsonResponse({'success': False, 'message': f"Error: {str(e)}"}, status=500) return JsonResponse({'success': False, 'message': "Invalid request method"}, status=405) -
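A note on the question above: Django's check_password (from django.contrib.auth.hashers) can only verify values produced by Django's own make_password; password_strength is a validation library, and if the stored value was produced elsewhere, or the raw password was hashed twice, verification will always fail. The round trip that the pair implements is the standard one-way-hash pattern, sketched here with the standard library only:

```python
import hashlib
import hmac
import os


def hash_password(raw, salt=None, iterations=100_000):
    """Sketch of the make_password idea: salted, one-way PBKDF2 hash."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", raw.encode(), salt, iterations)
    return salt, digest


def verify_password(raw, salt, digest, iterations=100_000):
    """Sketch of the check_password idea: re-hash the candidate, compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", raw.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)


salt, digest = hash_password("s3cret")
assert verify_password("s3cret", salt, digest)
assert not verify_password("wrong-password", salt, digest)
```

In Django itself, user.set_password(raw) on save and django.contrib.auth.authenticate() on login keep the two sides consistent; calling login() on a user fetched manually (as in the view above) normally also requires going through authenticate() first so that a backend is attached to the user.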
SAP Connection failed: name 'Connection' is not defined, PYRFC in django
I'm experiencing an issue with the pyrfc library in my Django project running on a PythonAnywhere server. Specifically, I am trying to use the Connection class from pyrfc to establish a connection to an SAP system, but I am encountering an ImportError when I try to import Connection in my views.py file. The error message says: SAP Connection failed: name 'Connection' is not defined However, when I test the same code in the Django shell, everything works fine, and the Connection class is correctly imported. I have verified that the environment variables SAP_NWRFC_HOME and LD_LIBRARY_PATH are set correctly, and the libsapnwrfc.so library loads successfully. If I import it like from pyrfc import Connection, it gives the error 2025-01-22 07:28:52,598: Error running WSGI application 2025-01-22 07:28:52,599: ImportError: cannot import name 'Connection' from 'pyrfc' (/home/moeez007/.local/lib/python3.10/site-packages/pyrfc/__init__.py) 2025-01-22 07:28:52,599: File "/var/www/moeez007_pythonanywhere_com_wsgi.py", line 80, in <module> 2025-01-22 07:28:52,600: from pyrfc import Connection I have also tried setting the environment variables in the WSGI file: import os os.environ["SAP_NWRFC_HOME"] = "/home/moeez007/nwrfcsdk" os.environ["LD_LIBRARY_PATH"] = "/home/moeez007/nwrfcsdk/lib" but the result is still the same. I have also tried running this as a separate file, with the same result. Why is it working in the Django shell and plain Python but not … -
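A likely explanation for the shell-versus-web difference in the question above: the interactive shell inherits SAP_NWRFC_HOME and LD_LIBRARY_PATH from .bashrc, but the web app's process does not, and LD_LIBRARY_PATH in particular is read by the dynamic loader only at process start, so assigning it via os.environ inside an already-running WSGI process comes too late. One common workaround (a sketch, not verified against pyrfc) is to load the shared library by absolute path before importing the package:

```python
import ctypes


def preload_shared_library(path):
    """Load a shared library by absolute path, bypassing LD_LIBRARY_PATH.

    The dynamic loader only consults LD_LIBRARY_PATH when the process
    starts, so loading by full path sidesteps the variable entirely.
    With path=None, ctypes returns a handle to the running process,
    which is used below only to demonstrate that the call succeeds.
    """
    return ctypes.CDLL(path)


# In the WSGI file, before `from pyrfc import Connection`, something like
# (path taken from the question, assumed correct):
# preload_shared_library("/home/moeez007/nwrfcsdk/lib/libsapnwrfc.so")
```

If preloading works, the import that follows it finds the already-loaded libsapnwrfc.so without any environment variable being set.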
How to configure Nginx to serve Django and a WordPress site on a specific route?
Good day good people of SO! I have a Django app on a Hetzner server and a Wordpress site on a Hostinger server. I want to configure Nginx on my Hetzner server to serve the Django app and when it requests the /route-name route, it serves a Wordpress site from the Hostinger server. I've already allowed the IP address of my Hetzner server to access the Wordpress site. Hetzner server is running Nginx and Hostinger is running Apache2 if that's relevant. I know there are quite a few questions and answers regarding configuring Nginx to serve Django and Wordpress but I've been scouring this forum and the Internet for several hours now with no solution found for my problem. I suspect it may have to do with my Nginx config, but after trying out several configurations accepted on here and elsewhere, I can't seem to make it work. This is what I currently have for my Nginx config: server { location = /favicon.ico { access_log off; log_not_found off; } location /static/ { alias /var/www/example.com/static/; } location /media/ { alias /var/www/example.com/media/; } # Reverse proxy for /freebies location /route-name/ { proxy_pass http://<IP-ADDRESS OF HOSTINGER SERVER>/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; … -
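One hedged suggestion about the config above: proxy_set_header Host $host forwards your Django site's hostname to the Hostinger server, and its Apache virtual host will generally not answer for that name. Pointing Host at the hostname WordPress is actually served under is the usual shape (wordpress.example.com below is a placeholder for that hostname):

```nginx
# Sketch: substitute the real WordPress hostname for wordpress.example.com.
location /route-name/ {
    proxy_pass http://<IP-ADDRESS OF HOSTINGER SERVER>/;
    proxy_set_header Host wordpress.example.com;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

WordPress also stores absolute URLs (the siteurl and home options), so unless those point at the proxied address it will redirect visitors, or emit asset and permalink URLs, that escape /route-name/ back to the original domain.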
Failed to start gunicorn.socket: Unit gunicorn.socket has a bad unit file setting
[Unit] Description=gunicorn daemon Requires=gunicorn.socket After=network.target [Service] User=evanys Group=www-data WorkingDirectory=/home/evanys/www/plus ExecStart=/home/evanys/www/django22/bin/gunicorn \ --access-logfile - \ --workers 3 \ --bind unix:/home/evanys/plus.sock \ plus.wsgi:application [Install] WantedBy=multi-user.target Hello, when I run sudo systemctl start gunicorn.socket I get the error Failed to start gunicorn.socket: Unit gunicorn.socket has a bad unit file setting. I would appreciate any solution. -
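Note that the unit quoted above is the .service file, while the failing unit is gunicorn.socket, which is a separate file. A minimal socket unit matching the --bind path above looks like this sketch; after editing unit files, sudo systemctl daemon-reload is required, and systemd-analyze verify gunicorn.socket will point at the exact offending line:

```ini
# /etc/systemd/system/gunicorn.socket (sketch; adjust the path to your setup)
[Unit]
Description=gunicorn socket

[Socket]
ListenStream=/home/evanys/plus.sock

[Install]
WantedBy=sockets.target
```

The "bad unit file setting" message usually means a misspelled key or a directive placed in the wrong section of gunicorn.socket itself.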
How to create a Django custom field with different representations in admin, database, and Python code?
I want to create a custom model field in Django that can store values in bytes in the database but allows users to interact with it in gigabytes (GB) in the Django admin interface. The goal is for the field to handle all necessary conversions seamlessly. Specifically, the field should accept input in GB when adding or editing a record in the admin, convert it to bytes before saving it to the database, and then retrieve it in bytes for use in Python code. I’ve started working on this by overriding methods like to_python, get_prep_value, from_db_value, and formfield, but I’m not entirely sure how to structure these methods to ensure the field behaves as intended. Here is what I already have: class GigabyteField(models.BigIntegerField): def to_python(self, value): if value is None: return value try: return int(value) except (TypeError, ValueError): raise ValueError(f"Invalid value for GigabyteField: {value}") def get_prep_value(self, value): if value is None: return None return int(value * (1024**3)) def from_db_value(self, value, *args): if value is None: return value return int(value / (1024**3)) -
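One inconsistency worth flagging in the code above: the stated goal is that Python code sees bytes, yet from_db_value divides by 1024**3, so Python code would actually see GB. If get_prep_value converts GB to bytes on save, from_db_value should return the stored value unchanged, and the GB conversion belongs in the form field the admin uses (via formfield). The two directions of the arithmetic are simply:

```python
GIB = 1024 ** 3  # the question's "GB" factor; strictly speaking a gibibyte


def gb_to_bytes(gb):
    """Admin input (GB) -> stored value (bytes): get_prep_value's direction."""
    return int(gb * GIB)


def bytes_to_gb(n):
    """Stored value (bytes) -> display value (GB): the form field's direction."""
    return n / GIB


assert gb_to_bytes(2) == 2_147_483_648
assert bytes_to_gb(2_147_483_648) == 2.0
```

Keeping each conversion in exactly one layer (form for display, get_prep_value for storage, from_db_value a pass-through) avoids the value being converted twice on a save/reload cycle.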
Making a Linux SSH key available to WSL2 Django launched by Windows PyCharm
I am working in a development environment for my Django application where I have PyCharm installed on Windows, and I launch Django in WSL2. This has been working for me flawlessly for a while now. However, I now need an SSH key for a third-party command-line tool my application invokes. I created the key in Linux and can execute the command successfully in a Linux terminal; however, when my Django application runs the command, it fails with the same error as before I created the SSH key. At one point I was 100% confident I had found a workaround of starting ssh-agent and then adding an environment variable to my run configuration to set SSH_AUTH_SOCK to the value of SSH_AUTH_SOCK from my terminal. However, over the course of trying to find an actually acceptable solution, I can no longer reproduce this, so I'm unsure whether that ever worked, or whether it worked for another reason I was not aware of. I can confirm that in this scenario my SSH_AUTH_SOCK does have the correct value in Django. Every time the command requiring the SSH key is executed I get an error … -
django s2forms.ModelSelect2Widget does not work properly
Hi all, I'm trying to use ModelSelect2Widget. I set up a Redis server, tested it, and it works. Then I set up the following project: models.py class Doctor(models.Model): user=models.OneToOneField(User,on_delete=models.CASCADE) status=models.BooleanField(default=True) def __str__(self): return "{} ({})".format(self.user.first_name,self.department) class Patient(models.Model): user=models.OneToOneField(User,on_delete=models.CASCADE) assignedDoctorId = models.ForeignKey(Doctor, on_delete=models.CASCADE,related_name='doctor_assigned') admitDate=models.DateField(auto_now=True) status=models.BooleanField(default=False) def __str__(self): return self.user.first_name form.py class BaseAutocompleteSelect(s2forms.ModelSelect2Widget): class Media: js = ("admin/js/vendor/jquery/jquery.min.js",) def __init__(self, **kwargs): super().__init__(kwargs) self.attrs = {"style": "width: 300px"} def build_attrs(self, base_attrs, extra_attrs=None): base_attrs = super().build_attrs(base_attrs, extra_attrs) base_attrs.update( {"data-minimum-input-length": 10, "data-placeholder": self.empty_label} ) return base_attrs class DoctorAutocompleteWidget(BaseAutocompleteSelect): empty_label = "-- select doctor --" search_fields = ("username__icontains",) queryset=models.Doctor.objects.all().filter(status=True).order_by("id") class PatientForm(forms.ModelForm): assignedDoctorId=forms.ModelChoiceField(queryset=models.Doctor.objects.all().filter(status=True), widget=DoctorAutocompleteWidget) but the result is an empty list (screenshot attached), while using assignedDoctorId=forms.ModelChoiceField(queryset=models.Doctor.objects.all().filter(status=True),empty_label="Name and Department") it shows me the list. I would like to use Select2 in order to use Redis and the search bar. I would like to create select and multi-select menus with a search bar to change list values; in the future I would like the same with a table list, and to change a dropdown menu option when the user enters a string in an input field or selects an option from another dropdown menu. -
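Three things stand out in the widget above (hedged guesses, but all easy to check). super().__init__(kwargs) passes the kwargs dict as a single positional argument instead of unpacking it; data-minimum-input-length: 10 means Select2 will not even query until ten characters have been typed, which looks exactly like a permanently empty list; and search_fields on a Doctor widget must traverse the user relation, since Doctor has no username field of its own. A corrected sketch:

```python
class BaseAutocompleteSelect(s2forms.ModelSelect2Widget):
    class Media:
        js = ("admin/js/vendor/jquery/jquery.min.js",)

    def __init__(self, **kwargs):
        super().__init__(**kwargs)  # unpack; don't pass the dict positionally
        self.attrs = {"style": "width: 300px"}

    def build_attrs(self, base_attrs, extra_attrs=None):
        base_attrs = super().build_attrs(base_attrs, extra_attrs)
        # 10 meant "wait for 10 typed characters"; 1 queries on first keystroke.
        base_attrs.update(
            {"data-minimum-input-length": 1, "data-placeholder": self.empty_label}
        )
        return base_attrs


class DoctorAutocompleteWidget(BaseAutocompleteSelect):
    empty_label = "-- select doctor --"
    search_fields = ("user__username__icontains",)  # traverse to the User model
```

With the input-length threshold lowered, the widget should start returning results as soon as the search endpoint is reachable.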
'IncompleteSignature' error from the AliExpress open platform
I am working with Django and Python, using AliExpress's official Python SDK. I am trying to get the ACCESS TOKEN from AliExpress. I get an 'IncompleteSignature' error, which means 'The request signature does not conform to platform standards', as part of the response body. Here are the full results: {'error_response': {'type': 'ISV', 'code': 'IncompleteSignature', 'msg': 'The request signature does not conform to platform standards', 'request_id': '2141154c17373626146733360'}} My code is very simple because I followed the sample code on their site. (https://openservice.aliexpress.com/doc/api.htm#/api?cid=3&path=/auth/token/create&methodType=GET/POST) Here is my code: def callback_handler(request): code = request.GET.get('code') url = "https://api-sg.aliexpress.com/sync" appkey = "123456" appSecret = "1234567890XXXX" client = iop.IopClient( url, appkey, appSecret, ) request = iop.IopRequest('/auth/token/create') request.add_api_param('code', code) response = client.execute(request) response_type = response.type response_body = response.body print(response_type) print(response_body) return HttpResponse(f"Response type: {response_type}, Response body: {response_body}") I posted the question to the AliExpress console, but they replied with a very vague answer and JavaScript reference code. I was shocked; it was not even Python, and this advice can only be implemented if I modify their SDK itself. I am not sure the Python SDK is of practical working quality. Since I have been spending too much time and … -
How to Include a Message Field in Structlog Logs and Best Practices for ElasticSearch Integration
I'm working on a Django project where logging is critical, and I'm using structlog to format and manage logs. The plan is to send these logs to ElasticSearch. However, I've encountered an issue: the logs are missing the "message" field, even though I explicitly pass a message in the logger call. Here’s the log output I currently get: { "code": 200, "request": "POST /api/push-notifications/subscribe/", "event": "request_finished", "ip": "127.0.0.1", "request_id": "d0edd77d-d68b-49d8-9d0d-87ee6ff723bf", "user_id": "98c78a2d-57f1-4caa-8b2a-8f5c4e295f95", "timestamp": "2025-01-21T10:40:43.233334Z", "logger": "django_structlog.middlewares.request", "level": "info" } What I want is to include the "message" field, for example: { "code": 200, "request": "POST /api/push-notifications/subscribe/", "event": "request_finished", "ip": "127.0.0.1", "request_id": "d0edd77d-d68b-49d8-9d0d-87ee6ff723bf", "user_id": "98c78a2d-57f1-4caa-8b2a-8f5c4e295f95", "timestamp": "2025-01-21T10:40:43.233334Z", "logger": "django_structlog.middlewares.request", "level": "info", "message": "push notification subscribed successfully" } Here’s my current setup: settings.py Logger Configuration LOGGING = { 'version': 1, 'disable_existing_loggers': False, "formatters": { "json_formatter": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.processors.JSONRenderer(), }, "plain_console": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.dev.ConsoleRenderer(), }, "key_value": { "()": structlog.stdlib.ProcessorFormatter, "processor": structlog.processors.KeyValueRenderer(key_order=['timestamp', 'level', 'event', 'message']), }, }, 'handlers': { "console": { "class": "logging.StreamHandler", "formatter": "plain_console", }, "json_file": { "level": "INFO", "class": "logging.handlers.RotatingFileHandler", "filename": "logs/ft_json.log", "formatter": "json_formatter", "maxBytes": 1024 * 1024 * 5, "backupCount": 3, }, "flat_line_file": { "level": "INFO", "class": 
"logging.handlers.RotatingFileHandler", "filename": "logs/flat_line.log", "formatter": "key_value", "maxBytes": 1024 * 1024 * … -
Having problems sending data between 2 scope classes in Django Channels
I am using Django Channels for the first time and can't wrap my head around something. Here is what I am trying to achieve: I want to create a new message in ChatConsumer, which is all good and fine. The problem occurs when I try to pass the id of the chat that the new message was created in. I don't get any errors or feedback at all; it just fails silently. Here is the code: class ChatConsumer(WebsocketConsumer): """ On initial request, validate user before allowing connection to be accepted """ #on intial request def connect(self): self.room_name = self.scope['url_route']['kwargs']['room_name'] self.room_group_name = f'chat_{self.room_name}' async_to_sync(self.channel_layer.group_add)( self.room_group_name, self.channel_name ) # accept connection self.accept() # send response self.send(text_data=json.dumps({ 'type':'connection_established', 'message':'You are connected' })) def receive(self, text_data): # get data sent from front end text_data_json = json.loads(text_data) # message message = str(text_data_json['form']['message']) if message.strip() == "": message = None # try to decode image or set to none if not available try: base64_image = text_data_json['form']['image'] if base64_image.startswith('data:image'): base64_image = base64_image.split(';base64,')[1] img_name = random.randint(1111111111111,999999999999999) data = ContentFile(base64.b64decode(base64_image), name= 'image' + str(img_name) + '.jpg') except AttributeError: data = None # send message sender = self.scope['user'] # extract chat ID chat_id = int(self.scope['url_route']['kwargs']['room_name']) try: _chat = Chat.objects.get(id = chat_id) … -
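A common silent-failure mode with Channels (hedged, since the code above is truncated): messages sent to a group via channel_layer.group_send are dispatched to a consumer method whose name is the message's "type" key with dots replaced by underscores; if no method of that name exists on the receiving consumer, nothing happens and nothing is logged. The shape that works, as a sketch reusing the names above:

```python
class ChatConsumer(WebsocketConsumer):
    def broadcast_new_message(self, message, chat_id):
        # "type": "chat.message" is dispatched to chat_message() on every
        # consumer in the group; extra keys ride along in the event dict.
        async_to_sync(self.channel_layer.group_send)(
            self.room_group_name,
            {"type": "chat.message", "message": message, "chat_id": chat_id},
        )

    def chat_message(self, event):
        # Runs on each group member, including consumers in other scopes.
        self.send(text_data=json.dumps(
            {"message": event["message"], "chat_id": event["chat_id"]}
        ))
```

If the chat id needs to cross from one consumer class to another, both consumers must have joined the same group name for the dispatch to reach them.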
Override existing custom Django App template tags
I have an application that uses Weblate to manage translations. I use weblate/weblate Docker image, with my own customizations built as a separate Python package extending this image and built on top of it. The problem is that in the Weblate HTML templates there is an icon template tag that is supposed to load SVG icons from a STATIC_ROOT or a CACHE_DIR location - but my application runs in a serverless setup and as such offloads all of the static resources to a S3 bucket. For most of the resources it works fine, but due to that template tag logic the icons are not loaded and I get these error messages - weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,913: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/weblate.svg' weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,918: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/wrench.svg' weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,919: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/plus.svg' weblate-1 | gunicorn stderr | [2025-01-21 12:41:08,923: WARNING/1540] Could not load icon: FileNotFoundError: [Errno 2] No such file or directory: '/app/cache/static/icons/dots.svg' I wrote my custom … -
ERROR: Failed to build installable wheels for some pyproject.toml based projects (Pillow)
I am making a website with django on vscode. I want to add a field containing images and other files. I did some research and it needs django-anchor installed. I installed it but got an error. Collecting django-anchor Using cached django_anchor-0.5.0-py3-none-any.whl.metadata (6.9 kB) Requirement already satisfied: django<6,>=4.2 in c:\msys64\ucrt64\lib\python3.10\site-packages (from django-anchor) (5.1.4) Collecting pillow<12,>=9.5 (from django-anchor) Using cached pillow-11.1.0.tar.gz (46.7 MB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: asgiref<4,>=3.8.1 in c:\msys64\ucrt64\lib\python3.10\site-packages (from django<6,>=4.2->django-anchor) (3.8.1) Requirement already satisfied: sqlparse>=0.3.1 in c:\msys64\ucrt64\lib\python3.10\site-packages (from django<6,>=4.2->django-anchor) (0.5.3) Requirement already satisfied: tzdata in c:\msys64\ucrt64\lib\python3.10\site-packages (from django<6,>=4.2->django-anchor) (2024.2) Requirement already satisfied: typing-extensions>=4 in c:\msys64\ucrt64\lib\python3.10\site-packages (from asgiref<4,>=3.8.1->django<6,>=4.2->django-anchor) (4.12.2) Using cached django_anchor-0.5.0-py3-none-any.whl (7.6 kB) Building wheels for collected packages: pillow Building wheel for pillow (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for pillow (pyproject.toml) did not run successfully. 
│ exit code: 1 ╰─> [209 lines of output] running bdist_wheel running build running build_py creating build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BdfFontFile.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BlpImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BmpImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\BufrStubImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\ContainerIO.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\CurImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\DcxImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\DdsImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\EpsImagePlugin.py -> build\lib.mingw_x86_64_ucrt-cpython-310\PIL copying src\PIL\ExifTags.py -> … -
RecursionError when adding data to a User many-to-many field: maximum recursion depth exceeded
I have this Conversation model, which has a many-to-many field to the User model. The issue is that when I try to add participants to a conversation I get RecursionError: maximum recursion depth exceeded. class Conversation(BaseModel): name = models.CharField(max_length=255, blank=True, null=True) is_group = models.BooleanField(default=False) participants = models.ManyToManyField(User, related_name="conversations", null=True) class Meta: db_table = "conversation" I have this mixins.py: class DestroyWithPayloadMixin(object): def destroy(self, *args, **kwargs): super().destroy(*args, **kwargs) return response.Response( { "msg": "Record deleted successfully" }, status=status.HTTP_200_OK ) class ModelDiffMixin(object): """ A model mixin that tracks model fields' values and provide some useful api to know what fields have been changed. """ def __init__(self, *args, **kwargs): super(ModelDiffMixin, self).__init__(*args, **kwargs) self.__initial = self.to_dict @property def diff(self): d1 = self.__initial d2 = self._dict diff_dict = {key: {'previous': value, 'current': d2[key]} for key, value in d1.items() if value != d2[key]} return diff_dict @property def changed_fields(self): return self.diff.keys() def get_field_diff(self, field_name): """ Returns a diff for field if it's changed and None otherwise. """ return self.diff.get(field_name, None) @property def to_dict(self): return model_to_dict(self, fields=[field.name for field in self._meta.fields]) -
How to Set Up Google Cloud ADC (Application Default Credentials) in Django on PythonAnywhere?
I'm trying to set up Google Cloud's Application Default Credentials (ADC) for my Django project on PythonAnywhere, but I keep encountering the following error: Error creating story: Your default credentials were not found. To set up Application Default Credentials, see https://cloud.google.com/docs/authentication/external/set-up-adc for more information. What I've Tried: Created a Service Account: Created a service account in Google Cloud and downloaded the JSON key file. Stored the file at: /home/footageflow/helloworld2003-754c20cfa98d.json. Set the GOOGLE_APPLICATION_CREDENTIALS Environment Variable: Added the following to .bashrc: export GOOGLE_APPLICATION_CREDENTIALS="/home/footageflow/helloworld2003-754c20cfa98d.json" Tried Programmatic Credentials: Explicitly set the variable in my Django code: import os os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/footageflow/helloworld2003-754c20cfa98d.json" Attempted CLI Authentication: Installed the gcloud CLI on PythonAnywhere. Ran gcloud auth application-default login and authenticated successfully. Problem: Despite trying all these steps, the error persists when I run the code on PythonAnywhere. The same code works fine locally after authenticating with gcloud. My Questions: Is there something specific I need to configure for ADC to work on PythonAnywhere? Do I need to grant additional permissions to my service account in Google Cloud? Could the issue be related to how PythonAnywhere handles environment variables or service accounts? Additional Information: The Django project is running on PythonAnywhere. Locally, the project works … -
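Two PythonAnywhere-specific points, hedged, based on how WSGI hosting generally works there: .bashrc is only sourced for interactive console sessions, so the web worker never sees that export, and gcloud auth application-default login writes to a per-user config directory that the web worker may not pick up. The most robust route is to skip ADC discovery entirely and pass the credentials explicitly (the path below is the one from the question; the client class is a placeholder):

```python
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file(
    "/home/footageflow/helloworld2003-754c20cfa98d.json"
)
# Pass `credentials=` to the client constructor instead of relying on ADC,
# e.g. (placeholder class name): SomeGoogleClient(credentials=credentials)
```

Alternatively, setting os.environ["GOOGLE_APPLICATION_CREDENTIALS"] in the WSGI file, before Django imports any module that constructs a Google client, should also work, since ADC reads the variable at client-construction time; setting it later in view code can be too late if a client was built at import time.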
Django ORM filter on datetime understands entries with a 'Z' offset, but not '+0000' offset
I have a MySQL database that mirrors a Salesforce database. My table has a column for "createddate" that contains ISO-8601 datetime entries. Some of the entries have an offset of "Z" and some of the entries have an offset of "+0000", like this: 2025-01-20T17:18:18.000Z 2025-01-20T18:11:10.000Z 2025-01-20T17:27:55.000+0000 2025-01-20T17:29:46.000Z 2025-01-20T17:28:19.000+0000 When I attempt to filter on a certain date, the filter ONLY returns lines that have a "Z" offset. Lines with "+0000" are not returned. My filter code looks like: receipts = Receipt.objects.filter(createddate__date='2025-01-20').count() As far as I can tell, both formats conform to ISO-8601. I do have USE_TZ set to True in my settings.py. The field is configured in models.py like: createddate = models.CharField(db_column='CreatedDate', max_length=28, blank=True, null=True) I'm relatively new to Django and its ORM; I'm currently working around this with a raw SQL query, but I'd rather do it natively if possible. -
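Both spellings are indeed valid ISO 8601, but since the column is a CharField the __date lookup makes MySQL cast strings to dates, and only one shape survives that cast; the durable fix is storing a real DateTimeField and letting Django normalize on save. For backfilling, Python's %z directive accepts "Z", "+0000", and "+00:00" alike (Python 3.7+), so one parser covers the whole column:

```python
from datetime import datetime, timezone

FMT = "%Y-%m-%dT%H:%M:%S.%f%z"  # %z matches "Z" as well as "+0000"


def parse_created(raw):
    """Normalize either Salesforce spelling to an aware UTC datetime."""
    return datetime.strptime(raw, FMT).astimezone(timezone.utc)


a = parse_created("2025-01-20T17:18:18.000Z")
b = parse_created("2025-01-20T17:18:18.000+0000")
assert a == b  # the two spellings denote the same instant
```

A data migration that runs this parser over the column and writes into a new DateTimeField makes createddate__date behave as expected afterwards.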
Duplicate the content of the block in Django template
Suppose I have the following code in base.html: <meta name="description" content="{% block description %}Some description{% endblock %}"> Other templates that extend base.html, override the contents of the block and so each page gets its own description. What if I want to add <meta property="og:description" content="" />, where the value of the content should equal the above value? How can I add the contents of the block into another place? -
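A block name cannot be repeated within one template, so the usual workaround for the question above is to move the shared text out of the block and into a context variable (set per view, or by a context processor), then reference it in both tags. A sketch, with meta_description as an assumed variable name:

```django
{# base.html -- meta_description is supplied by each view's context #}
<meta name="description" content="{{ meta_description|default:'Some description' }}">
<meta property="og:description" content="{{ meta_description|default:'Some description' }}" />
```

Each page then sets meta_description in its context instead of overriding a block, which keeps the two tags in sync by construction.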
JavaScript displays raw JSON instead of rendering the HTML content like the "index" button
I'm developing a Django web application that allows users to follow each other and view a feed of posts from the users they follow. I have a button that triggers an AJAX request to a Django view designed to fetch and render the relevant posts. My Django view correctly retrieves the posts from users the currently logged-in user follows and returns them as a JSON response. However, when my JavaScript code receives this response, it doesn't render the posts into the page as HTML. Instead, the browser displays the raw JSON data. I expect the JavaScript to use the JSON data to update the page with rendered HTML content representing the posts, but this isn't happening. How can I ensure that my JavaScript correctly processes the JSON and renders the posts as HTML? thanks for helping ❤️ post.js: document.addEventListener('DOMContentLoaded', function() { // Use buttons to toggle between views const index = document.querySelector('#index') const following = document.querySelector('#following') if(index) { index.addEventListener('click', () => load_data('index')); } if (following) { following.addEventListener('click', () => load_data("following")); } // By default, load the index load_data('index'); }); function load_data(type) { console.log("type:", type); let url; if (type == "index") { url = "/post" } else if (type == "following") … -
How do I set the active page in the paginator?
With Bootstrap 5.3, the paginator does not get the active state for the current page while browsing (image attached). I want the navigator activated for each page, as in the example image. My code: <nav aria-label="Page navigation example"> <ul class="pagination"> {% if tipomaterial.has_previous %} <li class="page-item"><a class="page-link" href="?page={{ tipomaterial.previous_page_number }}">&laquo;</a></li> {% endif %} {% for page_number in tipomaterial.paginator.page_range %} {% if items_page.number == page_number %} <a class="page-link" href="?page={{ tipomaterial.page_number }}"> {{ page.number }} </a> {% else %} <li class="page-item" aria-current="page"> <a class="page-link" href="?page={{ tipomaterial.page_number }}"> {{ page.number }} </a> </li> {% endif %} {% endfor %} {% if tipomaterial.has_next %} <li class="page-item"><a class="page-link" href="?page={{ tipomaterial.next_page_number }}">&raquo;</a></li> {% endif %} </ul> </nav> -
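Assuming tipomaterial is the Page object (inferred from the snippet above), three things need fixing: the current page number is tipomaterial.number (items_page is never defined), the loop variable page_number is what both the href and the label need (tipomaterial.page_number and page.number do not exist), and Bootstrap expects the active class on the current <li>. A corrected loop body:

```django
{% for page_number in tipomaterial.paginator.page_range %}
  {% if tipomaterial.number == page_number %}
    <li class="page-item active" aria-current="page">
      <a class="page-link" href="?page={{ page_number }}">{{ page_number }}</a>
    </li>
  {% else %}
    <li class="page-item">
      <a class="page-link" href="?page={{ page_number }}">{{ page_number }}</a>
    </li>
  {% endif %}
{% endfor %}
```

With the active class applied to the matching <li>, Bootstrap highlights the current page automatically.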
s3 upload timeout on dockerized Digital Ocean setup
I have a S3 compatible storage and a Droplet server on Digital Ocean. The dockerized Django app I am running is trying to sync static assets to the storage. This fails from the Droplet server/Docker container, but not when accessing the same S3 storage from my local setup. I can also test the upload straight from the server (outside the dockerized app) and this works, too. So something about the Docker setup is making the S3 requests fail. I made a simple test case, in s3upload.py with foobar.txt present in the same directory: from boto3.s3.transfer import S3Transfer import boto3 import logging logging.getLogger().setLevel(logging.DEBUG) client = boto3.client('s3', aws_access_key_id="…", aws_secret_access_key="…", region_name="ams3", endpoint_url="https://ams3.digitaloceanspaces.com") transfer = S3Transfer(client) bucket_name = "…" transfer.upload_file("foobar.txt", bucket_name, "foobar.txt") The error I am seeing when calling this from the docker container is: Traceback (most recent call last): File "/usr/local/lib/python3.13/site-packages/boto3/s3/transfer.py", line 372, in upload_file future.result() ~~~~~~~~~~~~~^^ File "/usr/local/lib/python3.13/site-packages/s3transfer/futures.py", line 103, in result return self._coordinator.result() ~~~~~~~~~~~~~~~~~~~~~~~~^^ File "/usr/local/lib/python3.13/site-packages/s3transfer/futures.py", line 264, in result raise self._exception File "/usr/local/lib/python3.13/site-packages/s3transfer/tasks.py", line 135, in __call__ return self._execute_main(kwargs) ~~~~~~~~~~~~~~~~~~^^^^^^^^ File "/usr/local/lib/python3.13/site-packages/s3transfer/tasks.py", line 158, in _execute_main return_value = self._main(**kwargs) File "/usr/local/lib/python3.13/site-packages/s3transfer/upload.py", line 796, in _main client.put_object(Bucket=bucket, Key=key, Body=body, **extra_args) ~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/usr/local/lib/python3.13/site-packages/botocore/client.py", line 569, in _api_call 
return self._make_api_call(operation_name, kwargs) ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^ … -
Django HTTP response always sets `sessionid` cookie and session data do not persist
I have created a custom backend and related middleware which log users in on the sole condition that an ID_TOKEN cookie is passed along with the request (authentication is done by AWS Cognito + Lambda@Edge, managed by an AWS CloudFront distribution). My code is extensively based on django.contrib.auth.backends.RemoteUserBackend and its related middleware django.contrib.auth.middleware.RemoteUserMiddleware. While dealing with custom session data works fine both locally and in a Docker container using runserver (and unit tests pass), I lose all session data in production (code running in a container on AWS ECS) from one request/response to another. From what I can see in my Firefox network tab, a set-cookie header is always sent with the HTTP response, causing session data to be lost. I guess they must be flushed as well on the back-end side (sessions use the database store; production is running on gunicorn). I have set SESSION_COOKIE_SECURE = True in production but it did not solve the issue. Moreover, using django_extensions and its runserver_plus with an auto-generated certificate to use HTTPS locally did not allow me to reproduce the issue either. Here is one set-cookie example: set-cookie sessionid=rlc...tn; expires=Mon, 03 Feb 2025 14:29:53 GMT; HttpOnly; Max-Age=1209600; Path=/; SameSite=Lax; … -
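Two hedged leads for the question above. First, django.contrib.auth.login() cycles the session key on every call, even for an already-authenticated user, so a middleware that logs in on every request emits a fresh set-cookie with every response; guarding the login avoids that. Second, with CloudFront in front, check that the cache/origin request policy actually forwards the sessionid cookie to the origin, since CloudFront does not forward cookies by default. A sketch of the guard (names assumed; adapt to your backend):

```python
from django.contrib import auth


class IdTokenMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        token = request.COOKIES.get("ID_TOKEN")
        if token and not request.user.is_authenticated:
            # Authenticate once per session; re-calling login() on every
            # request cycles the session key and re-sets the cookie.
            user = auth.authenticate(request, id_token=token)
            if user:
                auth.login(request, user)
        return self.get_response(request)
```

If the set-cookie header disappears for already-authenticated requests after this change, the per-request login was the culprit; if it persists, the CloudFront cookie-forwarding angle is the next thing to rule out.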
Django DB Connection Pool Shared Across Workers?
In Django, are DB connections in the Psycopg 3 DB Connection Pool shared between gevent gunicorn workers or does each worker spawn its own DB connection pool? https://docs.djangoproject.com/en/5.1/ref/databases/#connection-pool
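From the linked docs' model: the pool lives inside a process, so each gunicorn worker process builds its own pool, while gevent greenlets within a single worker share that worker's pool; worker count times max_size is the figure to compare against Postgres max_connections. The pool is enabled per database alias via OPTIONS; the sizes below are illustrative only:

```python
# settings.py sketch -- requires psycopg 3 and Django >= 5.1.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        # ... NAME, USER, PASSWORD, HOST as usual ...
        "OPTIONS": {
            # The dict is passed through to psycopg_pool.ConnectionPool;
            # "pool": True would use psycopg's defaults instead.
            "pool": {"min_size": 2, "max_size": 4, "timeout": 10},
        },
    }
}
```

Note that pooling and CONN_MAX_AGE are mutually exclusive: with a pool enabled, connection lifetime is managed by psycopg rather than by Django's persistent-connection machinery.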