Django community: RSS
This page, updated regularly, aggregates Django Q&A from the Django community.
-
how to access all buttons in every container
I want to assign an event listener to every button in every container, but I don't want to loop over the buttons directly, because I first need to access the container itself to read its hidden inputs and pass them as parameters to each event listener.

for (var i = 0; i < products.length; i++) {
    var id = products[i].getElementsByClassName('pr_id')[0].value;
    var price = products[i].getElementsByClassName('pr_price')[0].value;
    var Pname = products[i].getElementsByClassName('pr_name')[0].value;
    var btn = products[i].getElementsByTagName('button')[0];
    btn.addEventListener('click', function() {
        openpopUp(id, price, Pname);
    });
}

But this way the parameters of every button's handler get overwritten: in the end they all receive the parameters of the last item in the list. HTML code:

<div>
    <div class="productCard">
        <img src="'.$row['Picture'].'">
        <h3><a href="productDetails.php?id='.$row['Product_ID'].'">'.$row['Name'].'</a></h3>
        <h4>'.$row['price'].'</h4>
        <Button type="button" class="open">aa</Button>
        <input type="hidden" class="pr_id" id="producId" value="'.$row['Product_ID'].'">
        <input type="hidden" class="pr_name" name="productName" value="'.$row['Name'].'">
        <input type="hidden" class="pr_price" name="productPrice" value="'.$row['price'].'">
    </div>
</div> -
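A minimal sketch of the standard fix: declaring the per-item variables with `let`/`const` instead of `var` gives each loop iteration its own binding, so every click handler keeps its own id, price and name. Shown here with plain callbacks instead of DOM listeners so it runs anywhere; `collectHandlers` and the product objects are illustrative stand-ins, not the question's actual markup.

```javascript
// With `let`/`const` (block scope), each iteration gets fresh bindings,
// so the callbacks no longer all see the last item's values.
function collectHandlers(products) {
  const handlers = [];
  for (let i = 0; i < products.length; i++) {
    const { id, price, name } = products[i]; // fresh per iteration
    handlers.push(() => `${id}:${price}:${name}`);
  }
  return handlers;
}

const handlers = collectHandlers([
  { id: 1, price: 10, name: "A" },
  { id: 2, price: 20, name: "B" },
]);
console.log(handlers[0]()); // "1:10:A" — not the last item's values
console.log(handlers[1]()); // "2:20:B"
```

In the question's code, changing `var id`, `var price`, `var Pname` and the loop counter to `let` is enough; each `addEventListener` callback then closes over its own copies. -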
How to copy a custom .scss file's .css output to wwwroot in a Blazor app?
I have a Blazor WASM application where I'd like to have separate .scss files for global styling that are not component-specific CSS isolation files. The isolation files are bundled and available at runtime, so no issue there. I'm looking to take the transpiled .css file produced from its corresponding .scss source file and have that .css file copied to wwwroot so I can link to it in index.html. To hedge this a bit: I understand there already exists an app.css file in wwwroot where additional global styles can be applied. The problem is that it's a plain .css file and I'd like the power of SASS, so that isn't the option I want. You also can't put a .scss file inside wwwroot, because files that need to be transpiled belong outside that folder, in the project itself; wwwroot is a target for static files to be hosted, not raw .scss source files. I'm leveraging LibSassBuilder to transpile my .scss files to .css, so that's all taken care of. In this example I have a custom.css that's been transpiled from custom.scss and is ready for use, but I need it copied to wwwroot. It's been a … -
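One hedged option, sketched as an MSBuild target in the .csproj (the paths and the LibSassBuilder output location are assumptions, not verified against its docs), is to copy the generated file into wwwroot during the build:

```xml
<!-- Hypothetical: assumes LibSassBuilder emits Styles/custom.css next to
     Styles/custom.scss. Copies it under wwwroot so index.html can link it. -->
<Target Name="CopyCompiledCss" BeforeTargets="Build">
  <Copy SourceFiles="Styles/custom.css"
        DestinationFolder="wwwroot/css"
        SkipUnchangedFiles="true" />
</Target>
```

Whether this runs before the static web assets are collected depends on the project's target ordering, so treat it as a starting point rather than a drop-in answer. -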
How to get data from XBRL file into PHP
Hello, I want to create a PHP page to read an XBRL file and extract some information. Does anyone have experience with XBRL? I have tried the standard XML decoding but it does not work. -
dyld[67423]: missing symbol called on Intel Mac
I have this problem when building a site with 11ty. I'm not sure what happened to cause this error. I've found that most people reporting it are on M1 Macs, but I'm on a 2020 Intel MacBook Air. Not sure which info/logs could be helpful. Error: dyld[67423]: missing symbol called -
How can you set the parallelism for a specific ingress for an embedded statefun application
I have a custom ingress which, for various reasons, should run as a singleton. While I understand how the default parallelism can be set, I don't see a way to control it for a specific operator or ingress when embedded. I have searched the documentation but have yet to find anything related to this. -
Cannot read PDF Data into Sheets with Gspread-DataFrame
I want to read data from a PDF (downloaded using Tabula) into Google Sheets, but when I transfer the data as it was read, I get an error. I know the data I downloaded is dirty, but I wanted to just clean it up in Google Sheets. Downloading data from the PDF:

df = tabula.read_pdf(file_path, pages='all', multiple_tables='FALSE', stream='TRUE')
print(df)

[ Anderson 19,212 9,013 74 1,034 42 174 189 28 0 0.1 0 Bedford 11,486 3,395 25 306 8 47 75 5 0 0 1 Benton 4,716 1,474 12 83 13 11 14 2 0 0 2 Bledsoe 3,622 897 7 95 4 9 18 2 0 0 3 Blount 37,443 12,100 83 1,666 72 250 313 51 1 1 4 Bradley 29,768 7,070 66 1,098 44 143 210 29 1 1 5 Campbell 9,870 2,248 32 251 25 43 45 5 0 0 6 Cannon 4,007 1,127 8 106 7 18 29 3 0 0 7 Carroll 7,756 2,327 22 181 20 18 39 2 0 0 8 Carter 16,898 3,453 30 409 20 54 130 26 0 0 9 Cheatham 11,297 3,878 26 463 13 50 99 8 0 0 10 Chester 5,081 1,243 5 … -
Error with Spring Ecosystem and OpenFeign
Stack trace:

java.lang.IllegalStateException: Error processing condition on org.springframework.cloud.openfeign.FeignAutoConfiguration.cachingCapability
    at org.springframework.boot.autoconfigure.condition.SpringBootCondition.matches(SpringBootCondition.java:60) ~[spring-boot-autoconfigure-2.7.5.jar:2.7.5]
    at org.springframework.context.annotation.ConditionEvaluator.shouldSkip(ConditionEvaluator.java:108) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForBeanMethod(ConfigurationClassBeanDefinitionReader.java:193) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitionsForConfigurationClass(ConfigurationClassBeanDefinitionReader.java:153) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.annotation.ConfigurationClassBeanDefinitionReader.loadBeanDefinitions(ConfigurationClassBeanDefinitionReader.java:129) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.annotation.ConfigurationClassPostProcessor.processConfigBeanDefinitions(ConfigurationClassPostProcessor.java:343) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.annotation.ConfigurationClassPostProcessor.postProcessBeanDefinitionRegistry(ConfigurationClassPostProcessor.java:247) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanDefinitionRegistryPostProcessors(PostProcessorRegistrationDelegate.java:311) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.support.PostProcessorRegistrationDelegate.invokeBeanFactoryPostProcessors(PostProcessorRegistrationDelegate.java:112) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:746) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:564) ~[spring-context-5.3.23.jar:5.3.23]
    at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:147) ~[spring-boot-2.7.5.jar:2.7.5]
    at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:734) ~[spring-boot-2.7.5.jar:2.7.5]
    at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:408) ~[spring-boot-2.7.5.jar:2.7.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:308) ~[spring-boot-2.7.5.jar:2.7.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1306) ~[spring-boot-2.7.5.jar:2.7.5]
    at org.springframework.boot.SpringApplication.run(SpringApplication.java:1295) ~

I created this project through the IntelliJ interface. The only change I made was to downgrade the Spring Boot version from 3.0.1 to 2.7.5 -
Count of specific value and count of individual value
I have a table like:

Id  type    status
1   coach   1.0
2   coach   True
3   client  Null
4   coach   False
5   client  Null
6   coach   False
7   client  Null
8   coach   True
9   coach   1.0
10  client  Null

I want to create a column where a coach's status is 'Active' when the value is 1.0 or True, else 'Inactive'. I also want to add the count of active and inactive values in another column, and the total coach count as yet another column. I tried this code:

SELECT id, (CASE WHEN status IN (1.0, True) THEN 'Active' ELSE 'Inactive' END) AS Status FROM table WHERE type = 'Coach'

But I am unable to add the count of active and inactive, or the total coach count. -
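A sketch of one way to get all three columns at once, using window functions (shown with SQLite so it is self-contained and runnable; the table name and the text-encoded 1.0/True/False statuses are assumptions modelled on the question):

```python
# Window functions add per-group and overall counts to every row without
# collapsing the result the way GROUP BY would.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, type TEXT, status TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)", [
    (1, "coach", "1.0"), (2, "coach", "True"), (3, "client", None),
    (4, "coach", "False"), (5, "client", None), (6, "coach", "False"),
    (7, "client", None), (8, "coach", "True"), (9, "coach", "1.0"),
    (10, "client", None),
])

query = """
SELECT id,
       CASE WHEN status IN ('1.0', 'True') THEN 'Active' ELSE 'Inactive' END AS label,
       COUNT(*) OVER (PARTITION BY status IN ('1.0', 'True')) AS label_count,
       COUNT(*) OVER () AS total_coaches
FROM t
WHERE type = 'coach'
"""
result = conn.execute(query).fetchall()
for row in result:
    print(row)
```

MySQL 8+ and PostgreSQL support the same `COUNT(*) OVER (...)` windows directly, so the query carries over with the real column types. -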
MySQL tune slow delete
I'm using MySQL InnoDB; one of the most important tables has over 700 million records and 23 actively used indexes. I am trying to delete records in batches of 2,000 based on the record date (with an ORDER BY on the primary key column). Each date has around 6 million records, and I delete date by date. Each batch takes around 25 seconds to complete. Since it is a production database, I want this delete operation to complete faster. Is there a better way to do this? -
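A sketch of the batching idea, illustrated with SQLite so it is runnable (table and column names are made up): delete in small primary-key-ordered chunks and commit between chunks so each transaction and its undo log stay short. On MySQL the equivalent single statement is `DELETE FROM events WHERE record_date = ? ORDER BY id LIMIT 2000`, repeated until it affects zero rows; note that every deleted row must also be removed from all 23 secondary indexes, which is usually where the time goes.

```python
# Batched delete: remove one date's rows in PK-ordered chunks, committing
# after each chunk, until no rows remain for that date.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, record_date TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(i, "2023-01-01" if i % 2 else "2023-01-02")
                  for i in range(1, 21)])

BATCH = 3
deleted_total = 0
while True:
    cur = conn.execute(
        "DELETE FROM events WHERE id IN ("
        "  SELECT id FROM events WHERE record_date = ? ORDER BY id LIMIT ?)",
        ("2023-01-01", BATCH))
    conn.commit()  # short transactions: lock/undo pressure stays bounded
    if cur.rowcount == 0:
        break
    deleted_total += cur.rowcount

print(deleted_total)  # 10 (the odd ids 1..19 carry the target date)
```

If most of the table will eventually be deleted date by date, partitioning by date (so old dates can be dropped with `ALTER TABLE ... DROP PARTITION`) is the usual faster alternative to row-by-row deletes. -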
Pytorch Results Differ in Training eval() vs Testing/inference eval()
I am trying to replicate the results I get on my evaluation dataset during training with the same dataset at test time. For context, I am denoising images, which are passed through a simple NN consisting of a few conv, ReLU and BN layers. My problem is that my results are quite close, but there is always some disparity: for instance, in training the model gets 37.49 dB and in testing it gets 37.40. I have done a bit of reading and it seems that this has to do with batch norm in the model, but setting track_running_stats to False hasn't helped, as some people suggest it should. I also make sure to call model.eval() on my model, and I use with torch.no_grad():. The only method I have found to bring the PSNR close is to test using a small minibatch (3 images), but this is not practical as the end goal is to process images separately. I think this increase might also be due to the PSNR function itself processing larger batches at a time. -
Can GNU Octave handle dynamical plots for the GUI?
I tested this logging software for Arduino and it works great, for a while. The problem is that the plot makes the application freeze, and I don't know why. I have been testing the software for a while and it seems that the plot is the cause: it freezes, and then the whole application freezes. It does not matter whether I plot with a regular plot call or keep a handle to the plot object and update its X and Y vectors. https://github.com/DanielMartensson/GNU-Octave-Logger/blob/1c6001c115f402a6c3ec71c2ac95817666ea615b/export/wnd/GNU_Octave_Logger_thread.m#L98

% Focus on plot
axis(wnd.plot);
h = plot(rand(2));
[ code ]
set(h, {'YData'}, {analogInPlot; analogOutPlot});
set(h, {'XData'}, {L; L});
legend(h, 'Analog in', 'Analog out');

So my question is: even when I only update the vectors, the plot can still freeze, and then the whole application freezes as well. This happens in combination with the GUI running. If I comment out the plot, the application works fine:

%set(h, {'YData'}, {analogInPlot; analogOutPlot});
%set(h, {'XData'}, {L; L});
%legend(h, 'Analog in', 'Analog out');

Can it be that GNU Octave cannot handle dynamical plots for the GUI? -
Javascript: How to reduce json when consecutive key is the same?
I have a JSON structure where I need to merge items of the same type (isSender) whenever they occur consecutively. The data can change. It looks like this:

"messages": [ { "content": ["Foo"], "isSender": true }, { "content": ["Bar"], "isSender": true }, { "content": ["Lorem"], "isSender": true }, { "content": ["Ipsum"], "isSender": true }, { "content": ["Dolor"], "isSender": false }, { "content": ["Sit Amet"], "isSender": false }, { "content": ["No"], "isSender": true } ]

I need the content to be an array of messages whenever consecutive items have the same "isSender" value. The output should look like this:

"messages": [ { "content": ["Foo", "Bar", "Lorem", "Ipsum"], "isSender": true }, { "content": ["Dolor", "Sit amet"], "isSender": false }, { "content": ["No"], "isSender": true } ]

So far I have tried looping through the messages array and checking whether the next message has the same "isSender" value. However, this does not work for more than 2 consecutive messages.

let deleteIndex = [];
for (let i = 0; i < messages.length - 1; i++) {
    const currMs = messages[i];
    const nextMs = messages[i + 1];
    if (nextMs.isSender == currMs.isSender) {
        currMs.content.push(nextMs)
        deleteIndex.push(i + 1) // saving … -
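A reduce-based sketch of the grouping (assuming the message objects look exactly as above): instead of marking entries for deletion afterwards, build a new array and merge each message into the last group while isSender matches.

```javascript
// Merge consecutive messages with the same isSender into one group.
function groupConsecutive(messages) {
  return messages.reduce((acc, msg) => {
    const last = acc[acc.length - 1];
    if (last && last.isSender === msg.isSender) {
      last.content.push(...msg.content); // extend the current run
    } else {
      acc.push({ content: [...msg.content], isSender: msg.isSender }); // new run
    }
    return acc;
  }, []);
}

const grouped = groupConsecutive([
  { content: ["Foo"], isSender: true },
  { content: ["Bar"], isSender: true },
  { content: ["Dolor"], isSender: false },
  { content: ["No"], isSender: true },
]);
console.log(JSON.stringify(grouped));
// [{"content":["Foo","Bar"],"isSender":true},
//  {"content":["Dolor"],"isSender":false},
//  {"content":["No"],"isSender":true}]
```

Because each message is compared only to the group it would join, runs of any length collapse correctly, which is where the pairwise loop above breaks down. -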
Chromium video decoding issue
I work on a team that is developing a browser-based video application. We have been experiencing an occasional severe degradation in the quality of our video starting around November 2022. It happens during a call and continues until the call (the connection) is ended, and appears to be a decoding issue: the screen looks pixelated/discolored but you retain some idea of the 'shape' of objects on the video feed. We use Pexip, which acts as an MCU between participants, and have validated from Pexip's outbound packets that it is sending a clean video stream; however, on the browser we see the issue nonetheless. It is only ever seen on a single participant's video stream. The issue has only been seen on Chrome and Edge, which default to VP8, and has not been reproduced on Firefox (which defaults to H.264). This leads me to believe it is an issue with Chromium. I have struggled to find any logs that relate to the start of the issue, and am asking if you have any suggestions on where to look in Chrome's logs for any indication that an issue has started, or to understand more about why the issue is happening? … -
Pine Script capture 24hrs Change
I'm currently coding a Pine Script strategy for my auto-trading, but I need just one more condition, and that is the 24-hour change. Is it possible to measure the change in price over 24 hours in Pine Script on any timeframe? If not, can it be done over, say, the past 100 bars, and the change calculated from that data? It's my first question here and thank you very much to future respondents! I would like to know if it's possible to get the 24-hour change in Pine. -
Jenkins Email SMTP server errors
So, having some issues in Jenkins while setting up e-mail notification. I'm using these settings, which work fine in code on my websites and in Azure Pipelines:

smtp: tulip.specialservers.com, port: 25, EnableSsl: false, user: xxxxxx@xxxxx.com, password: xxxxxxx

As I say, these creds work elsewhere; however, on Jenkins I have tried every combination:

port: 25, EnableSsl: false → jakarta.mail.AuthenticationFailedException: 535 Invalid Username or Password
port: 465, EnableSsl: false → jakarta.mail.MessagingException: Got bad greeting from SMTP host: tulip.specialservers.com, port: 465, response: [EOF]
port: 25, EnableSsl: true → javax.net.ssl.SSLException: Unsupported or unrecognized SSL message
port: 465, EnableSsl: true → jakarta.mail.AuthenticationFailedException: 535 Invalid Username or Password

When I last set this up a couple of years ago there was no issue. Can anyone provide me with some ideas, please? Ta -
In R, how can I create a new column with values 1/0, where the value in the new column is 1 only if values in two other columns are both 1?
I have two columns within a DF, "wet" and "cold", each with values of 1 and 0, e.g.:

Wet Cold
1   1
0   1
0   1
1   0
1   1
0   0

I would like to create a new column, wet&cold, where wet&cold=1 only if wet=1 and cold=1. If either or both of them are 0, then wet&cold=0. I tried to work around it with grepl, but without success. -
No gradients provided for any variable:
I am getting this error in TensorFlow while using gradient tape on this function. I have tried everything but am still not getting the output.

with tf.GradientTape() as tape:
    critic_value = -agent.critic(state_batch, old_actions)
    actor_loss = tf.math.reduce_mean(critic_value)
actor_grad = tape.gradient(actor_loss, agent.actor.trainable_variables)
agent.actor.optimizer.apply_gradients(zip(actor_grad, agent.actor.trainable_variables))
-
How to find the subimage in a large image using python cv2
Main image with sub-images: I have an image containing sub-images. How can I find the sub-images in the main image without the template-matching method, using Python code? Or is there any software to find the sub-images?

import cv2

# Load the image
image = cv2.imread("img_2.jpg")

# Define the callback function for the threshold slider
def threshold_callback(threshold):
    # Perform Canny edge detection
    edges = cv2.Canny(image, threshold, threshold * 2)
    # Display the edges
    cv2.imshow("Edges", edges)

# Create a window to display the original image
cv2.namedWindow("Original", cv2.WINDOW_NORMAL)
cv2.imshow("Original", image)

# Create a trackbar for adjusting the threshold
cv2.createTrackbar("Threshold", "Original", 0, 255, threshold_callback)

# Wait for the user to press a key
cv2.waitKey(0)

# Close all windows
cv2.destroyAllWindows()

I tried this but it is not working. -
How to make a timer reset
I just started teaching myself front-end coding and was wondering how to make a timer that, when it ends, moves a button to a random position on the screen. Here's the code I wrote (stole), but the timer doesn't restart when it ends. I need it to move the button after resetting the timer.

var h3 = document.getElementsByTagName("h3");
h3[0].innerHTML = "Countdown Timer With JS";
var sec = 5,
    countDiv = document.getElementById("timer"),
    secpass,
    countDown = setInterval(function () {
        "use strict";
        secpass();
    }, 1000);

function secpass() {
    "use strict";
    var min = Math.floor(sec / 60),
        remSec = sec % 60;
    if (remSec < 10) {
        remSec = "0" + remSec;
    }
    if (min < 10) {
        min = "0" + min;
    }
    countDiv.innerHTML = min + ":" + remSec;
    if (sec > 0) {
        sec = sec - 1;
    } else {
        clearInterval(countDown);
        var b = document.getElementById("thing");
        var i = Math.floor(Math.random() * 800) + 1;
        var j = Math.floor(Math.random() * 600) + 1;
        b.style.left = i + "px";
        b.style.top = j + "px";
    }
} -
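One way to restructure this (a sketch; the function name and wiring are illustrative): instead of calling clearInterval when the countdown hits zero, reset the counter and let the interval keep running. Pulling the per-second logic into a pure function also makes the reset easy to verify without a DOM.

```javascript
// Returns the next "seconds remaining" value. When the countdown expires
// it fires the callback and restarts from `seconds` instead of stopping.
function tick(remaining, seconds, onExpire) {
  remaining -= 1;
  if (remaining <= 0) {
    onExpire();      // e.g. move the button to a random position
    return seconds;  // reset → the countdown repeats forever
  }
  return remaining;
}

// Hypothetical browser wiring (element id "thing" from the question):
// let sec = 5;
// setInterval(function () {
//   sec = tick(sec, 5, function () {
//     var b = document.getElementById("thing");
//     b.style.left = (Math.floor(Math.random() * 800) + 1) + "px";
//     b.style.top = (Math.floor(Math.random() * 600) + 1) + "px";
//   });
// }, 1000);

// Deterministic check of the reset behaviour: 6 ticks of a 3-second
// countdown should expire exactly twice and leave the counter reset.
let moves = 0;
let s = 3;
for (let i = 0; i < 6; i++) {
  s = tick(s, 3, () => { moves += 1; });
}
console.log(moves, s); // 2 3
```

The key change versus the original: removing the clearInterval call and reassigning the counter, so the same interval drives every cycle. -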
How do I clear the view cache for a single file in express.js?
When I render my homepage like this:

router.get('/', function(req, res, next) {
  res.render('../_cache/homepage-1.hbs', {
    title: 'Home',
    style: 'home-new',
    projectSlug: 'homepage',
  });
});

it seems to cache homepage-1.hbs as it was when the server first started. If I then edit the file, it will still show the old version until I reboot the server, but only in production; it does not happen in development. How can I clear this cache? -
Getting <__main__.Email object at 0x000001C114A1BF10> error when trying to print emails [duplicate]
I am trying to simulate an email message system whereby the user can select send, read, mark as spam or quit. When I run the code, I can send an email and quit; however, when I select read or mark as spam, I get the following email object output:

What would you like to do - read/mark spam/send/quit?read
List of emails
<__main__.Email object at 0x00000230D6036F10>
<__main__.Email object at 0x00000230D64325D0>
<__main__.Email object at 0x00000230D6432510>
<__main__.Email object at 0x00000230D6432650>
Enter number of email you want to read: 3
<__main__.Email object at 0x00000230D6432510>
What would you like to do - read/mark spam/send/quit?mark spam
List of emails
<__main__.Email object at 0x00000230D6036F10>
<__main__.Email object at 0x00000230D64325D0>
<__main__.Email object at 0x00000230D6432510>
<__main__.Email object at 0x00000230D6432650>

This is my code:

# Defining class for Email as per instructions on task
class Email:
    # Creating functions for Email class
    def __init__(self, email_contents, from_address):
        self.from_address = from_address
        self.is_spam = False
        self.has_been_read = False
        self.email_contents = email_contents

    def mark_as_read(self):
        self.has_been_read = True

    def mark_as_spam(self):
        self.is_spam = True

# Creating list for emails
inbox = []

# Creating method 'add_email'
def add_email(contents, email_address):
    email = Email(contents, email_address)
    inbox.append(email)

# Creating method 'get_count'
def get_count():
    return len(inbox)

# Creating method 'get_email' … -
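Those `<__main__.Email object at 0x...>` lines are not an error: they are Python's default object repr, used because the class defines no __str__. A hedged sketch of the usual fix (attribute names come from the question; the display format itself is made up):

```python
# Defining __str__ makes print(email) show readable text instead of the
# default "<__main__.Email object at 0x...>" repr.
class Email:
    def __init__(self, email_contents, from_address):
        self.from_address = from_address
        self.is_spam = False
        self.has_been_read = False
        self.email_contents = email_contents

    def mark_as_read(self):
        self.has_been_read = True

    def mark_as_spam(self):
        self.is_spam = True

    def __str__(self):
        flags = []
        if self.has_been_read:
            flags.append("read")
        if self.is_spam:
            flags.append("spam")
        suffix = f" [{', '.join(flags)}]" if flags else ""
        return f"From: {self.from_address} - {self.email_contents}{suffix}"

email = Email("Hello there", "alice@example.com")
print(email)  # From: alice@example.com - Hello there
```

With this in place, the "List of emails" loop can keep calling print(each_email) (or print an index plus str(each_email)) and show the sender and contents instead of memory addresses. -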
render a markdown file to pdf with rmarkdown::render() and adjust page margins and font
I'd like to render a simple markdown file, created earlier by another process, into a PDF file. The command:

rmarkdown::render(input = "my_report.md", output_format = rmarkdown::pdf_document(latex_engine = "xelatex"))

does the job. However, I would like to change the margins and the main font. With an .Rmd file one would define these settings in the YAML header like this:

---
output:
  pdf_document:
    latex_engine: xelatex
mainfont: LiberationSans
geometry: "left=5cm,right=3cm,top=2cm,bottom=2cm"
---

But the markdown files I'd like to convert don't have a YAML header. Is there a way to pass these YAML options to the render function as function parameters, or in some indirect way? -
How to add multiple commands with Docker to run FastAPI & a CRON job together
I have a Dockerfile that can run the FastAPI app and the cron job scheduler separately very well, but I want to run them together. How can I do it? Folder structure: Docker file

FROM python:3.8
RUN apt-get update && apt-get -y install cron vim
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip \
 && wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip \
 && unzip instantclient-basiclite-linuxx64.zip \
 && rm -f instantclient-basiclite-linuxx64.zip \
 && cd /opt/oracle/instantclient* \
 && rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci \
 && echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf \
 && ldconfig
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
COPY crontab /etc/cron.d/crontab
COPY hello.py /app/hello.py
RUN chmod 0644 /etc/cron.d/crontab
RUN /usr/bin/crontab /etc/cron.d/crontab
EXPOSE 8000
# run process of container
CMD ["uvicorn", "main:app", "--reload", "--host", "0.0.0.0", "--port", "8000"]
CMD ["cron", "-f"]
-
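A Dockerfile keeps only the last CMD, so the second CMD above silently replaces the first. A common configuration sketch (file names are assumptions) is a small entrypoint script that starts cron in the background and keeps uvicorn in the foreground:

```shell
#!/bin/sh
# entrypoint.sh: start cron in the background, then exec uvicorn in the
# foreground so it keeps the container alive and receives signals.
cron
exec uvicorn main:app --host 0.0.0.0 --port 8000
```

and, in the Dockerfile, replacing the two CMD lines:

COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
CMD ["/app/entrypoint.sh"]

An alternative with the same shape is a process supervisor (e.g. supervisord) managing both processes, which adds restart-on-crash behaviour the plain script lacks. -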
How to add reply-to address to curl::send_mail?
To be able to receive replies to emails sent through a relay, I need to specify a reply-to address. How can this be done with curl::send_mail? How can I add the Reply-To header? -
Keras loss value very high and not decreasing
Firstly, I know that similar questions have been asked before, but mainly for classification problems; mine is a regression-style problem. I am trying to train a neural network using Keras to evaluate chess positions using Stockfish evaluations. The input is boards in a (12, 8, 8) array (representing piece placement for each individual piece) and the output is the evaluation in pawns. When training, the loss stagnates at around 500,000-600,000. I have a little over 12 million boards + evaluations and I train on all the data at once. The loss function is MSE. This is my current code:

model = Sequential()
model.add(Dense(16, activation = "relu", input_shape = (12, 8, 8)))
model.add(Dropout(0.2))
model.add(Dense(16, activation = "relu"))
model.add(Dense(10, activation = "relu"))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(1, activation = "linear"))
model.compile(optimizer = "adam", loss = "mean_squared_error", metrics = ["mse"])
model.summary()
# model = load_model("model.h5")

boards = np.load("boards.npy")
evals = np.load("evals.npy")
perf = model.fit(boards, evals, epochs = 10).history
model.save("model.h5")

plt.figure(dpi = 600)
plt.title("Loss")
plt.plot(perf["loss"])
plt.show()

This is the output of a previous epoch:

145856/398997 [=========>....................] - ETA: 26:23 - loss: 593797.4375 - mse: 593797.4375

The loss will remain at 570,000-580,000 upon further fitting, which is not ideal. The loss should decrease by a few more orders of magnitude …
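One likely contributor (an assumption, not a diagnosis): MSE on targets measured in raw centipawn-scale numbers naturally starts in the hundreds of thousands, so the absolute loss value says little by itself. A common sketch of a fix is to standardize the targets before fitting and invert the scaling on predictions, shown here with plain Python so it is self-contained (`evals` is a tiny toy stand-in for the real array):

```python
# Standardize regression targets so MSE starts near 1, then undo the
# scaling on model outputs to recover values in pawns.
import statistics

evals = [150.0, -320.0, 45.0, 980.0, -75.0]  # toy stand-in data

mean = statistics.mean(evals)
std = statistics.pstdev(evals)

# These scaled values would be passed to model.fit instead of evals.
evals_scaled = [(e - mean) / std for e in evals]

def to_pawns(prediction_scaled):
    """Map a network output back to the original evaluation scale."""
    return prediction_scaled * std + mean

restored = [to_pawns(s) for s in evals_scaled]
print(restored[0])  # round-trips the first target (up to float error)
```

The scaling constants must be saved alongside the model so inference can undo them; whether this fully explains the plateau depends on the data, but it makes the loss curve interpretable. -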