Requested by Clara, who wanted to know that date for internal
processes. We agreed to add only the most recent payment/collection
date, instead of all of them when there are multiple
payments/collections; she can tell whether that date is for a partial
or a complete payment/collection from the status column.
I remove the related taxes and attachments, but keep the related
payments, because i believe it is unlikely that deleting a paid expense
is what the user wants. If the user wants to do so, she can delete the
payments first.
Part of #84
This is only for user-visible strings; the name from the point of view
of code and database remains the same.
This is an attempt to force a distinction between payment methods, used
in invoices, and payment accounts, used for payments.
Closes#100.
I do not particularly enjoy an htmx-only way of doing that, because it
means that it can only work with JavaScript, but i think this is already
a lost cause, unfortunately. If i have time, i will try to make the
HTML-only form work too.
In this case, i have to put back the same row when updating or
cancelling the form, and that row lives inside index.html. Instead of
moving that part to a separate file, i tried to define a block as a
“template fragment” and render only that part. Surprisingly, it works;
i am happy.
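For illustration, a minimal sketch of that approach with html/template
(the block name “expense-row” and the view model are made up):

    package main

    import (
        "html/template"
        "net/http"
    )

    // Expense is an illustrative view model for one table row.
    type Expense struct {
        Invoicer string
        Amount   string
    }

    // index.html would contain something like:
    //   {{range .Expenses}}{{block "expense-row" .}}<tr>…</tr>{{end}}{{end}}
    // {{block}} renders the row inline *and* registers it as a named
    // template, so the same markup can be executed on its own for the
    // htmx response.
    var tmpl = template.Must(template.ParseFiles("index.html"))

    func updateExpense(w http.ResponseWriter, r *http.Request) {
        row := Expense{Invoicer: r.FormValue("invoicer"), Amount: r.FormValue("amount")}
        // Render only the fragment, not the whole page.
        if err := tmpl.ExecuteTemplate(w, "expense-row", row); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
        }
    }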
Closes#74.
I use HTTP 422 to signal that a form was submitted with bad data,
which i believe is the correct status code: “indicates that the server
understands the content type of the request content […], and the syntax
of the request content is correct, but it was unable to process the
contained instructions.”[0]
htmx, however, treats all 4xx status codes as errors and, by default,
does not swap the target with the response’s content. Until i found out
that i could change that behaviour, i worked around this limitation by
returning HTTP 200 for htmx requests, but that was a waste of time given
that htmx _can_ accept HTTP 422 as a non-error.
[0]: https://www.rfc-editor.org/rfc/rfc9110#name-422-unprocessable-content
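Schematically, the handler side looks like this (the form type and
markup are illustrative); on the client, htmx needs its behaviour
changed, e.g. with an htmx:beforeSwap listener, so that 422 responses
are swapped in:

    package main

    import "net/http"

    // expenseForm is an illustrative form type; the real one is richer.
    type expenseForm struct {
        Amount string
        Valid  bool
    }

    func parseExpenseForm(r *http.Request) expenseForm {
        amount := r.FormValue("amount")
        return expenseForm{Amount: amount, Valid: amount != ""}
    }

    // handleNewExpense re-renders the form with HTTP 422 when validation
    // fails, so htmx can swap the response back into the page.
    func handleNewExpense(w http.ResponseWriter, r *http.Request) {
        form := parseExpenseForm(r)
        if !form.Valid {
            w.WriteHeader(http.StatusUnprocessableEntity) // 422 Unprocessable Content
            w.Write([]byte("<form>…the re-rendered form with errors…</form>"))
            return
        }
        w.Write([]byte("ok")) // placeholder for the success path
    }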
For the same reasons as with expenses[0], users are no longer expected
to set the invoice status manually; it is now linked to their
collections.
In this case, however, we had to remove the ‘sent’ and ‘unpaid’ status
options, because these _should_ only be set manually, as there is no
way for the application to know when to set them. Thus, there could
be inconsistencies, like invoices set to ‘unpaid’ when they actually
have collections, or invoices that were ‘sent’, then transitioned to
‘partial’/‘paid’ due to a collection, but then reset to ‘created’ if the
collection was deleted.
[0]: ac0143b2b0
This is mostly the same subsection as the payments one for expenses,
added in 4f646e35d. In this case i call it “collections”, but it is
actually the same payments section.
This is the same as a payment, but the user is the payee instead of the
payer.
I used a different relation than payment because i do not know any other
way to encode the constraint that only invoices can have a collection,
while expenses have only payments.
Besides the name and the fact that they are related to invoices, a
collection is pretty much the same as a payment.
Oriol and i agreed that, to add new payments to expenses, we should
direct users to a separate payments section, much like the general one
but centered on the payments of the given expense.
In fact, the only thing i had to do was extract the expense from the
URL and then adjust the base URI to keep things always within the
correct section; the rest of the code is shared with the general
section.
I was repeating myself a lot for this use case, because each one needed
a different URL and SQL query; however, they were structurally similar
and could be refactored into common functions.
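Purely as an illustration of that refactoring (the handler shape, names
and paths are hypothetical, not the actual code):

    package main

    import (
        "fmt"
        "net/http"
    )

    // servePayments backs both the general payments section and the
    // per-expense one: only the base URI and the optional expense slug
    // change, the rest of the logic is shared.
    func servePayments(baseURI, expenseSlug string) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            // expenseSlug, when not empty, only narrows the SQL query;
            // every link and form action is built from baseURI so the
            // user never leaves the section they are in.
            fmt.Fprintf(w, `<a href="%s/new">New payment</a>`, baseURI)
            _ = expenseSlug
        }
    }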
I actually did not forget them: i left them out on purpose, mistakenly
believing that PostgreSQL’s row-level policies would return only rows
from the current company. That is actually how Camper works,
but that’s because we use the request’s domain name to select the
company; here we use the path, and the row-level policy would return
rows from all companies the user belongs to.
I needed to place the payment accounts section somewhere, and the most
logical place seemed to be that dialog, where users can set up company
parameters.
However, that dialog was already saturated with related, but ultimately
independent, forms, and adding the accounts section would make things
even worse, especially given that we need to be able to edit those
accounts in a separate page.
We agreed to split that dialog into tabs, which means separate pages.
Once i had everything in separate pages, i did not know how to actually
share the code for the tabs, so i decided that, for now, these “tabs”
would be items in the profile menu. Same function, different
presentation.
Users are no longer expected to manually set the status of an expense
and, instead, have to add payments to the expense to mark it as partial
or paid.
That means that the PL/pgSQL functions must not accept a status
parameter, that the edit and new forms should no longer have a field
for the status, and that the expense list should no longer have the
“quick edit” for the status. That’s also why the expense status should
no longer have a pointer cursor, unlike the invoice or quote status.
I am using an htmx-infused button to remove the payment, but that
button can not have the CSRF token as its value, thus i have to send it
in a header.
The removal of payments warrants a function, instead of just DELETE
(and CASCADE) as i do for payment methods, because i have to adjust the
status of expenses too. Since i already have functions for everything,
it is not worth using triggers just for that.
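Roughly, the handler side could look like this (the header name and the
remove_payment function name are assumptions for illustration):

    package main

    import (
        "database/sql"
        "net/http"
    )

    // removePayment: the CSRF token arrives in a header because the htmx
    // button can not carry it as its value, and the deletion goes through
    // a PL/pgSQL function so that the expense status is recomputed too.
    func removePayment(db *sql.DB, w http.ResponseWriter, r *http.Request, paymentSlug string) {
        if r.Header.Get("X-CSRF-Token") == "" { // the real check validates the token
            http.Error(w, "invalid CSRF token", http.StatusForbidden)
            return
        }
        if _, err := db.Exec("select remove_payment($1)", paymentSlug); err != nil {
            http.Error(w, err.Error(), http.StatusInternalServerError)
            return
        }
        w.WriteHeader(http.StatusNoContent)
    }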
This actually should be the “payments and receivables” section;
however, that is quite a mouthful, and since a “receivable” is a
payment made **to** you, “payments” is ok.
In fact, there are still no receivables in there, as they should be in
a separate relation, to constrain them to invoices instead of expenses.
It will be done in a separate commit.
Since this section will be, in a sense, sort of simplified accounting,
i needed to introduce the “payment account” concept. There is no way,
yet, for users to add them, because i have to revamp the “tax details”
section, but this commit started to grow too big already.
The same reasoning applies to attaching payment slips as PDFs to
payments: something i have to add, but not yet in this commit.
In the HTML tables i only compute the aggregated amount by tax class
(e.g., IVA, IRPF), but here we need the actual tax (e.g., IVA 4 %)
because this spreadsheet is intended for accountants.
I can easily extract the amounts from invoice_tax_amount and
expense_tax_amount, but i also need to add the columns to the
spreadsheet, and always in the same order (it does not matter much
which, only that it is consistent); that’s why i had to sort the tax
IDs when exporting, as Go does not guarantee an iteration order for
maps.
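The sorting itself is the usual Go idiom for deterministic map
iteration, roughly (map type simplified):

    package main

    import "sort"

    // columnsForTaxes returns the tax IDs of the aggregated amounts in a
    // stable order, so the spreadsheet always has the same columns in the
    // same position. The map type is simplified for illustration.
    func columnsForTaxes(amountsByTaxID map[int]string) []int {
        ids := make([]int, 0, len(amountsByTaxID))
        for id := range amountsByTaxID {
            ids = append(ids, id)
        }
        sort.Ints(ids) // map iteration order is unspecified, so sort explicitly
        return ids
    }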
Closes#92
This is to help us “sell” the service: people can look around the demo
to see whether it fits them. Of course, everyone should have the same
username in the demo.
We talked about having the username and password displayed above the
form in the template, but i think it makes more sense to give users as
little work as possible. Plus, that means i do not have to write them
down while developing.
Whether the database is demo or not is not something that directly
depends on the environment, but rather on which database we are
connected to; thus, an environment variable would not make much sense.
It has to be something in the database itself.
PostgreSQL has no PRAGMA application_id or PRAGMA user_version, as
SQLite has, to attach application-specific values to the database. The
closest equivalent would be customized options[0], intended for module
configuration, but that would require me to execute an ALTER DATABASE
in demo.sql with a specific database name, or to force the use of psql
to run the script, because then i could use the :DBNAME placeholder.
I guess that the most “standard” way is to just create a function that
returns a known value if the database is demo. Sqitch does not add that
function, therefore it is unlikely to be there by chance unless it is
the demo database.
[0]: https://www.postgresql.org/docs/15/runtime-config-custom.html
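One way the check could look from Go, assuming the marker function is
called is_demo (the actual name may differ):

    package main

    import "database/sql"

    // isDemoDatabase reports whether the connected database is the demo
    // one by checking for the marker function that only demo.sql creates.
    // The function name is an assumption for illustration.
    func isDemoDatabase(db *sql.DB) (bool, error) {
        var demo bool
        // to_regproc returns NULL when no function with that name exists.
        err := db.QueryRow("select to_regproc('is_demo') is not null").Scan(&demo)
        return demo, err
    }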
The legal stuff. Required by Spanish law when setting up a site intended
for pecuniary gain, directly or indirectly.
Now we have more pages in the “public web”, so i moved the header and
footer from home to the common layout. I also took the opportunity to
change the elements from <div> to the appropriate element based on
their use (i.e., <header> and <footer>).
I removed the <div> around the logo because i did not see any use for
it. It may be from a previous design iteration, but it had no style
applied nor any usage at all in JavaScript.
This is mostly to reassure people that we are running the same version
as published on numerus.cat. Or, at least, to try.
Go 1.18 adds the info from git if the package is built from a git
repository, but that is not the case in OBS, so i instead rely on a
constant for the version number. This constant is “updated” by Debian’s
rules, mostly due to the discussion in [0].
[0]: https://github.com/golang/go/issues/22706
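As a hedged sketch of how both sources could be combined (not
necessarily what the code does; the constant name is made up):

    package main

    import "runtime/debug"

    // version is the release number baked into the source; debian/rules
    // would rewrite it at build time. The name is illustrative.
    const version = "0.0.0"

    // Version prefers the VCS revision recorded by Go 1.18+ when the
    // binary is built from a git checkout, and falls back to the constant
    // otherwise (OBS builds from a tarball, so there is no VCS info).
    func Version() string {
        if info, ok := debug.ReadBuildInfo(); ok {
            for _, s := range info.Settings {
                if s.Key == "vcs.revision" && s.Value != "" {
                    return s.Value
                }
            }
        }
        return version
    }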
This is mainly to be able to style them using CSS; the current style
i set is just a placeholder to check that it works as expected.
Most of these links need to check for the URI’s prefix, because they
are links to a whole section, but the first link must check for an
exact match, otherwise it would match every other URI, as all of them
start with /company/{uuid}.
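The matching rule boils down to something like this (a sketch, not the
exact template helper):

    package main

    import "strings"

    // isCurrentSection decides whether a navigation link should be
    // highlighted. The home link must match exactly, because every URI in
    // the application starts with /company/{uuid}; section links match by
    // prefix so that, e.g., /company/{uuid}/quotes/new still highlights
    // “Quotes”.
    func isCurrentSection(requestURI, linkURI, homeURI string) bool {
        if linkURI == homeURI {
            return requestURI == homeURI
        }
        return strings.HasPrefix(requestURI, linkURI)
    }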
The server does not return the markup for the top navigation when using
htmx, though, hence i have to change the current class using JavaScript.
I am not sure whether the correct value for aria-current is “page” when
the link is not for the actual page the user is currently on, like when
they are on the new quote page, but it seems to be the most appropriate
value from the enumeration given in the specifications. The only other
candidate, perhaps, is the “location” value, but i was unable to find
any example of that value anywhere.
Part of #89.
This is for users that belong to more than one company. It is just a
page with links to the home of each company that the user belongs to.
Had to add a second company to the demo data to test it properly, even
though i already have unit tests for multicompany, but, you know….
It makes no sense to retrieve the same OIDs on each and every
connection, because they are not going to change unless the database is
reset, something that is very unlikely to happen in production.
Thus, it is best to query them the first time the application connects
to the database, which already happens at startup to query the
available languages, and then reuse the OIDs.
I can get away with using an “unprotected” map, instead of sync.Map or
a map in tandem with sync.RWMutex, because the application establishes
a connection at startup from a single goroutine, and it registers _all_
the types we will need within the application’s lifespan, hence there
will be no more writes to that map once the web server is listening for
incoming connections.
This is risky, however, and i hope i do not have to regret it.
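Schematically, the caching could look like this (simplified sketch; the
real code also registers the types with the database driver):

    package main

    import (
        "database/sql"
        "log"
    )

    // typeOIDs caches the OIDs of the application's custom types. A plain
    // map is enough: it is written once, from a single goroutine, while
    // the application starts up, and only read after the web server
    // begins accepting requests.
    var typeOIDs = map[string]uint32{}

    // cacheTypeOIDs resolves each type name to its OID once and keeps it
    // for the lifetime of the process.
    func cacheTypeOIDs(db *sql.DB, typeNames []string) {
        for _, name := range typeNames {
            var oid uint32
            err := db.QueryRow("select oid::int from pg_type where typname = $1", name).Scan(&oid)
            if err != nil {
                log.Fatalf("could not resolve OID for type %s: %v", name, err)
            }
            typeOIDs[name] = oid
        }
    }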
When adding “free-form products” to quotes they do not have a product
ID, but i was coalescing the NULL to zero, because product_id is an
integer and i can not coalesce a nullable integer to an empty string.
However, that causes problems when trying to create the invoice for that
quote, because it tries to add products that have an ID of 0 and the
foreign key, obviously, fails.
At first i modified NewInvoiceProductArray.EncodeBinary to check for
"0" as well as the empty string, but i realized this was wrong: the
problem was that i gave these products an ID when they do not have one.
The solution is to cast product_id to text, which is what it will get
converted to anyway, because the only thing i do with it is store it in
a string-backed InputForm field.
Closes#73.
This was requested by a potential user, as they want to be able to do
whatever they want to do to these lists with a spreadsheet.
In fact, they requested to be able to export to CSV, but, as always,
using CSV is a minefield because of Microsoft: since their Excel product
is fucking unable to write and read CSV from different locales, even if
using the same exact Excel product, i can not create a CSV file
that is guaranteed to work on all locales. If i used the non-standard
sep=; thing to tell Excel that it is a fucking stupid application, then
proper applications would show that line as a row, which is the correct
albeit undesirable behaviour.
The solution is to use a spreadsheet file format that does not have
this issue. As far as i know, by default Excel is able to read XLSX and
ODS files, but i refuse to use the artificially complex, lobbied
standard (which is not even the one Excel actually uses) that Microsoft
somehow convinced ISO to publish. I am only using a different format
because of the mess they made, and i do not want to bend over in front
of them, so ODS it is.
ODS is neither an elegant nor a good format by any means, but at least
i can write it using simple strings, because there is no ODS library
in Debian and i am not going to write yet another DEB package for an
overengineered library just to write a simple table; all i want is to
say “here are these n columns, and these m columns; have a good day!”.
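To give an idea of the “simple strings” approach, a sketch of the table
fragment of content.xml (the rest of the ODS container, a zip with a
mimetype entry and a manifest, is omitted):

    package main

    import (
        "fmt"
        "html"
        "strings"
    )

    // tableXML builds the <table:table> fragment of an ODS content.xml
    // for a simple grid of strings. Every cell is written as text, which
    // is all this export needs.
    func tableXML(name string, rows [][]string) string {
        var b strings.Builder
        fmt.Fprintf(&b, `<table:table table:name="%s">`, html.EscapeString(name))
        for _, row := range rows {
            b.WriteString("<table:table-row>")
            for _, cell := range row {
                fmt.Fprintf(&b,
                    `<table:table-cell office:value-type="string"><text:p>%s</text:p></table:table-cell>`,
                    html.EscapeString(cell))
            }
            b.WriteString("</table:table-row>")
        }
        b.WriteString("</table:table>")
        return b.String()
    }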
Part of #51.
Since most of the PL/pgSQL functions accept a `uuid` domain, we get an
error if the value is not valid, forcing us to return an HTTP 500, as
we can not tell that the error was caused by the invalid value.
Instead, i now validate that the slug is indeed a valid UUID before
attempting to send it to the database, returning the correct HTTP error
code and avoiding useless calls to the database.
I based the validation function on Parse() from Google’s uuid package[0]
because this function is an order of magnitude faster in benchmarks:
goos: linux
goarch: amd64
pkg: dev.tandem.ws/tandem/numerus/pkg
cpu: Intel(R) Core(TM) i5-6200U CPU @ 2.30GHz
BenchmarkValidUuid-4 36946050 29.37 ns/op
BenchmarkValidUuid_Re-4 3633169 306.70 ns/op
The regular expression used for the benchmark was:
var re = regexp.MustCompile("^[a-fA-F0-9]{8}-[a-fA-F0-9]{4}-4[a-fA-F0-9]{3}-[8|9|aA|bB][a-fA-F0-9]{3}-[a-fA-F0-9]{12}$")
And the input parameter for both functions was the following valid UUID,
because most of the time the passed UUID will be valid:
"f47ac10b-58cc-0372-8567-0e02b2c3d479"
I did not use the uuid package, even though it is in Debian’s
repository, because i only need to check whether the value is valid,
not convert it to a byte array. As far as i know, that package can not
do the validation alone.
[0]: https://github.com/google/uuid
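The validation boils down to a fixed-position check, roughly like this
(a sketch in the spirit of the function described, not the exact code):

    package main

    // validUuid reports whether s looks like a canonical, hyphenated
    // UUID: check the length, the position of the hyphens, and that
    // everything else is a hexadecimal digit, without allocating or
    // converting to bytes.
    func validUuid(s string) bool {
        if len(s) != 36 {
            return false
        }
        for i := 0; i < len(s); i++ {
            c := s[i]
            switch i {
            case 8, 13, 18, 23:
                if c != '-' {
                    return false
                }
            default:
                isHex := (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')
                if !isHex {
                    return false
                }
            }
        }
        return true
    }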
I want this button, as well as the submit button, to be on a row below
the filters’ inputs, especially for quotes and invoices, which have the
most filters and look weird with the button wedged in. Thus, i added
a <fieldset> around all the filters.
Closes#69
This works mostly like invoices: i have to “update” the expense form
to compute its total based on the subtotal and the selected taxes,
although in this case i do not need to compute the subtotal because
that is given by the user.
Nevertheless, i added a new function to compute that total because it
was already hairy enough for the dashboard, which also needs to compute
the total, not just the base, and i wanted to test that function.
There is no need for a custom input type for that function, as it only
needs a couple of simple domains. I have created the output type,
though, because otherwise i would need to use records or “reuse” some
other “amount” output type, which would be confusing.
Part of #68.
Works exactly the same as for expenses, and this is sometimes convenient
for keeping transfer slips from customers and such.
I actually did not know where to add the download link for this
attachment, because if i add a column to the index it can easily be
confused with the download icon for the actual invoice.
Part of #66.
We only want two statuses for expenses: not yet paid (pending), and
paid. Thus, it is a bit different from quotes and invoices, because
expenses do not pass through the “workflow” of
created→sent→{pending,paid}. That’s why in this case the status field
is already in the new expense form, instead of hidden, and why pending
is equivalent not to created but to unpaid (i.e., the same status
color).
With the new select field in the form, the file field can no longer
span two columns, or it would end up alone on the next row.
Closes#67.
This was requested by Oriol; there are no other technical or legal
requirements for this.
I can not simply append the customer name to the file name because it
could have characters that are not valid in file names, depending on
the operating system, so i have to “slugify” it.
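The slugify step can be as small as this sketch (the real function may
keep more characters):

    package main

    import "strings"

    // slugify turns a customer name into something safe to embed in a
    // file name on any operating system: lowercase ASCII letters, digits,
    // and hyphens only. Illustrative, not necessarily the exact code.
    func slugify(name string) string {
        var b strings.Builder
        lastHyphen := true // avoid a leading hyphen
        for _, r := range strings.ToLower(name) {
            switch {
            case r >= 'a' && r <= 'z', r >= '0' && r <= '9':
                b.WriteRune(r)
                lastHyphen = false
            default:
                if !lastHyphen {
                    b.WriteByte('-')
                    lastHyphen = true
                }
            }
        }
        return strings.TrimSuffix(b.String(), "-")
    }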
Closes#65
There was no explicit `order by` in the queries that list the products
of quotes and invoices, so PostgreSQL was free to use any order it
wanted. In this case, since was am grouping first by name, the result
was sorted by product name.
This is not an issue in most cases, albeit a bit rude to the user,
except for when the products *have* to be in the same order the user
entered them, because they are monthly fees or something like that,
which must be ordered by month _number_, not by their _name_; the user
will usually input them in the correct order they want them on the
invoice or quote.
Sorting by *_product_id does *not* guarantee that they will always be
in insertion order, because the sequence can “wrap”, but i think i am
going to have bigger problems at that point.
Closes#63
When i wrote the functions to import contacts, i already created a
couple of “temporary” functions to validate whether the input given
from the Excel files was correct according to the various domains used
in the relations, so that i could know whether i can import that data.
I realized that i could do exactly the same when validating forms: check
that the value conforms to the domain, in the exact same way, so i can
make sure that the value will be accepted without duplicating the logic,
at the expense of a call to the database.
In an ideal world, i would use pg_input_is_valid, but this function is
only available in PostgreSQL 16 and Debian 12 uses PostgreSQL 15.
These functions are in the public schema because initially i wanted to
use them to also validate emails, which is needed in the login form,
but then i recanted and kept the same email validation in Go, because
something felt off about using the database for that particular form,
although i do not know why.
This allows importing an Excel file exported from Holded, because that
is our own use case. When we have more customers, we will give out an
Excel template file to fill out.
Why XLSX files instead of CSV, for instance? First, because this is the
output from Holded, but even then we would have more trouble with CSV
than with XLSX because of Microsoft: they royally fucked up
interoperability when they decided that CSV files, the files that only
other
applications or programmers see, should be “localized”, and use a comma
or a **semicolon** to separate a **comma** separated file depending on
the locale’s decimal separator.
This is ridiculous because it means that CSV files created with Excel
in the USA use commas, while the same Excel but with a French locale
expects the fields to be separated by semicolons. And for no good
reason, either.
Since they fucked up so badly, they decided to add a non-standard “meta”
to specify the separator, writing a `sep=,` in the first line, but this
only works for reading, because saving the same file changes the
separator back to the locale-dependent character and removes the “meta”
field.
And since everyone expects to open spreadsheets with Excel, i can not
use CSV if i do not want a bunch of support tickets telling me that the
template is all in a single line.
I use an extremely old version of an XLSX reading library for Go[0]
because it is already available in Debian’s repositories, and the only
thing i want from it is to convert the convoluted XML file into a
string array.
Go is only responsible for reading the file and dumping its contents
into a
temporary table, so that it can execute the PL/pgSQL function that will
actually move that data to the correct relations, much like add_contact
does but in batch.
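Schematically, the Go side does little more than this (using the
library’s Cell.Value field; the temporary-table insert is elided):

    package main

    import (
        "fmt"

        "github.com/tealeg/xlsx"
    )

    // readSheet dumps the first sheet of an XLSX file into a plain string
    // matrix, which is all the import needs before handing the data over
    // to the PL/pgSQL function.
    func readSheet(fileName string) ([][]string, error) {
        file, err := xlsx.OpenFile(fileName)
        if err != nil {
            return nil, fmt.Errorf("could not open %s: %w", fileName, err)
        }
        if len(file.Sheets) == 0 {
            return nil, fmt.Errorf("%s has no sheets", fileName)
        }
        var rows [][]string
        for _, row := range file.Sheets[0].Rows {
            var cells []string
            for _, cell := range row.Cells {
                cells = append(cells, cell.Value) // raw string value of the cell
            }
            rows = append(rows, cells)
        }
        return rows, nil
    }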
In PostgreSQL version 16 they added a pg_input_is_valid function that
i would use to test whether input values really conform to domains,
but i will have to wait for Debian to pick up the new version.
Meanwhile, i use a couple of temporary functions, in lieu of nested
functions support in PostgreSQL.
Part of #45
[0]: https://github.com/tealeg/xlsx