package pkg

import (
	"context"
	"log"

	"github.com/jackc/pgx/v4"
	"github.com/jackc/pgx/v4/pgxpool"
)

Implement login cookie, its verification, and logout

At first i thought that i would need to implement sessions, the kind
that keeps small files on disk, to know which user is talking to the
server, but then i realized that, for now at least, i only need a very
large number, plus the email address, to use as a lookup, and that can
be stored in the user table, in a separate schema.

I had to change login to avoid raising an exception when login fails,
because i now keep a record of login attempts, and functions always run
in a single transaction, so the exception would prevent me from
inserting into login_attempt. Even with a separate procedure, i could
not keep the records.

I did not want to add a parameter to the logout function because i was
afraid that it could then be called on behalf of other users. I do not
know whether that is possible with the current approach, since the
settings variable is also set by the same application; time will tell.

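A minimal sketch of how the Go side might drive that flow with the
helpers defined below. Only the login and logout function names come
from the message above; their signatures and return values are
assumptions made for illustration.

// Hypothetical sketch: the signatures and return values of the login()
// and logout() database functions are assumptions; only the names
// appear in the commit message above.
func loginAndLogout(ctx context.Context, db *Db, email, password string) {
	conn := db.MustAcquire(ctx)
	defer conn.Release()

	// An empty cookie means the login failed; the attempt is still
	// recorded in login_attempt because login no longer raises an
	// exception.
	cookie := conn.MustGetText(ctx, "", "select login($1, $2)", email, password)
	if cookie == "" {
		return
	}

	// Later, once the cookie has been applied to the connection
	// settings, logout needs no parameter.
	conn.MustExec(ctx, "select logout()")
}
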
type Db struct {
	*pgxpool.Pool
}

func NewDatabase(ctx context.Context, connString string) (*Db, error) {
	config, err := pgxpool.ParseConfig(connString)
	if err != nil {
		log.Fatal(err)
	}

	// Every new connection looks up relations in the numerus schema
	// first and registers the application’s PostgreSQL types.
	config.AfterConnect = func(ctx context.Context, conn *pgx.Conn) error {
		if _, err := conn.Exec(ctx, "SET search_path TO numerus, public"); err != nil {
			return err
		}
		return registerPgTypes(ctx, conn)
	}

	// Before handing a connection to a request, pass along the request’s
	// login cookie (possibly empty) so the database can set its own
	// settings and role.
	config.BeforeAcquire = func(ctx context.Context, conn *pgx.Conn) bool {
		cookie := ""
		if value, ok := ctx.Value(ContextCookieKey).(string); ok {
			cookie = value
		}
		if _, err := conn.Exec(ctx, "select set_cookie($1)", cookie); err != nil {
			log.Printf("ERROR - Failed to set role: %v", err)
			return false
		}
		return true
	}

	// Reset the role before the connection goes back into the pool.
	config.AfterRelease = func(conn *pgx.Conn) bool {
		if _, err := conn.Exec(context.Background(), "RESET ROLE"); err != nil {
			log.Printf("ERROR - Failed to reset role: %v", err)
			return false
		}
		return true
	}

	pool, err := pgxpool.ConnectConfig(ctx, config)
	if err != nil {
		return nil, err
	}
	return &Db{pool}, nil
}

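For illustration, a hypothetical way to construct the pool at startup;
the connection string and the surrounding function are assumptions, not
values from this repository.

// Hypothetical startup sketch; the connection string and the run()
// wrapper are made up for the example.
func run() error {
	db, err := NewDatabase(context.Background(), "postgres://numerus:secret@localhost:5432/numerus")
	if err != nil {
		return err
	}
	defer db.Close()
	// … hand db to the HTTP handlers …
	return nil
}
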
func notFoundErrorOrPanic(err error) bool {
	if err == pgx.ErrNoRows {
		return true
	}
	if err != nil {
		panic(err)
	}
	return false
}

Add a function to set request settings and the role

I did not like the idea that it was the Go server that had to set
values such as request.user or set the role, because this is mostly
something the database wants for itself, such as when calling logout.
I am also planning to use these settings for row security with the
user’s id, which the Go application has no need for, but with the
current approach i would need to return it from check_cookie so that
it can be sent back to the database when acquiring the connection.

I would have used the same function to set the settings and the role,
but security definer functions—obviously, in retrospect—can not set the
role, because then callers could switch to any role of the user that
defined the function, not just the roles they are a member of. Thus, a
new function.

I did not want to do that every time i needed the database connection
within the same request, because it would perform the same operations
each time—it is the same cookie, after all—so new connections are
request scoped and passed along in the context.

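A minimal sketch of what that request scoping might look like in a
handler. The handler, the cookie name, and the use of net/http are
assumptions; ContextCookieKey and the Acquire helpers defined below are
the only names taken from this file.

// Hypothetical sketch: attach the request’s cookie to the context so
// that BeforeAcquire can call set_cookie, then keep the same scoped
// connection around for the rest of the request. Assumes net/http is
// imported; the cookie name is made up.
func handleRequest(w http.ResponseWriter, r *http.Request, db *Db) {
	cookie := ""
	if c, err := r.Cookie("numerus-session"); err == nil {
		cookie = c.Value
	}
	ctx := context.WithValue(r.Context(), ContextCookieKey, cookie)
	conn := db.MustAcquire(ctx)
	defer conn.Release()
	// … use ctx and conn for the rest of the request …
}
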
func (db *Db) Acquire(ctx context.Context) (*Conn, error) {
	conn, err := db.Pool.Acquire(ctx)
	if err != nil {
		return nil, err
	}
	return &Conn{conn}, nil
}

func (db *Db) MustAcquire(ctx context.Context) *Conn {
	conn, err := db.Acquire(ctx)
	if err != nil {
		panic(err)
	}
	return conn
}

type Conn struct {
	*pgxpool.Conn
}

func (c *Conn) MustBegin(ctx context.Context) *Tx {
	tx, err := c.Begin(ctx)
	if err != nil {
		panic(err)
	}
	return &Tx{tx}
}

func (c *Conn) MustGetText(ctx context.Context, def string, sql string, args ...interface{}) string {
	var result string
	if notFoundErrorOrPanic(c.Conn.QueryRow(ctx, sql, args...).Scan(&result)) {
		return def
	}
	return result
}

func (c *Conn) MustGetBool(ctx context.Context, sql string, args ...interface{}) bool {
	var result bool
	if err := c.Conn.QueryRow(ctx, sql, args...).Scan(&result); err != nil {
		panic(err)
	}
	return result
}

func (c *Conn) MustExec(ctx context.Context, sql string, args ...interface{}) {
	if _, err := c.Conn.Exec(ctx, sql, args...); err != nil {
		panic(err)
	}
}

func (c *Conn) MustQuery(ctx context.Context, sql string, args ...interface{}) pgx.Rows {
	rows, err := c.Conn.Query(ctx, sql, args...)
	if err != nil {
		panic(err)
	}
	return rows
}

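For illustration, a hypothetical caller iterating over a result set
with MustQuery; the relation and column names are assumptions.

// Hypothetical usage sketch; contact and name are made-up names, only
// MustQuery comes from this file.
func listNames(ctx context.Context, conn *Conn) []string {
	rows := conn.MustQuery(ctx, "select name from contact order by name")
	defer rows.Close()
	var names []string
	for rows.Next() {
		var name string
		if err := rows.Scan(&name); err != nil {
			panic(err)
		}
		names = append(names, name)
	}
	if rows.Err() != nil {
		panic(rows.Err())
	}
	return names
}
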
type Tx struct {
	pgx.Tx
}

func (tx *Tx) MustCommit(ctx context.Context) {
	if err := tx.Commit(ctx); err != nil {
		panic(err)
	}
}

func (tx *Tx) MustRollback(ctx context.Context) {
	if err := tx.Rollback(ctx); err != nil {
		panic(err)
	}
}

func (tx *Tx) MustExec(ctx context.Context, sql string, args ...interface{}) {
	if _, err := tx.Exec(ctx, sql, args...); err != nil {
		panic(err)
	}
}

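A hypothetical sketch of how these transaction helpers combine; the
statement, relation, and wrapper function are assumptions. The Must*
helpers panic on failure, so callers are expected to recover further up
the stack.

// Hypothetical usage sketch; the SQL statement and names are made up
// for the example.
func renameContact(ctx context.Context, conn *Conn, id int, name string) {
	tx := conn.MustBegin(ctx)
	tx.MustExec(ctx, "update contact set name = $1 where contact_id = $2", name, id)
	tx.MustCommit(ctx)
}
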
Allow importing contacts from Holded

This allows importing an Excel file exported from Holded, because that
is our own use case. When we have more customers, we will give out an
Excel template file to fill out.

Why XLSX files instead of CSV, for instance? First, because this is
the output from Holded, but even then we would have more trouble with
CSV than with XLSX because of Microsoft: they royally fucked up
interoperability when they decided that CSV files, the files that only
other applications or programmers see, should be “localized”, and use
a comma or a **semicolon** to separate a **comma**-separated file
depending on the locale’s decimal separator.

This is ridiculous because it means that CSV files created with an
Excel in the USA use a comma, while the same Excel with a French locale
expects the fields to be separated by a semicolon. And for no good
reason, either.

Since they fucked up so badly, they decided to add a non-standard
“meta” field to specify the separator, writing a `sep=,` in the first
line, but this only works for reading, because saving the same file
changes the separator back to the locale-dependent character and
removes the “meta” field.

And since everyone expects to open spreadsheets with Excel, i can not
use CSV if i do not want a bunch of support tickets telling me that the
template is all in a single line.

I use an extremely old version of an XLSX reading library for Go[0],
because it is already available in Debian repositories, and the only
thing i want from it is to convert the convoluted XML file into a
string array.

Go is only responsible for reading the file and dumping its contents
into a temporary table, so that it can execute the PL/pgSQL function
that will actually move that data to the correct relations, much like
add_contact does, but in batch.

PostgreSQL 16 adds a pg_input_is_valid function that i would use to
test whether input values really conform to domains, but i will have to
wait for Debian to pick up the new version. Meanwhile, i use a couple
of temporary functions, in lieu of nested function support in
PostgreSQL.

Part of #45

[0]: https://github.com/tealeg/xlsx

func (tx *Tx) MustGetText(ctx context.Context, sql string, args ...interface{}) string {
	var result string
	if err := tx.QueryRow(ctx, sql, args...).Scan(&result); err != nil {
		panic(err)
	}
	return result
}

func (tx *Tx) MustGetInteger(ctx context.Context, sql string, args ...interface{}) int {
	var result int
	if err := tx.QueryRow(ctx, sql, args...).Scan(&result); err != nil {
		panic(err)
	}
	return result
}

func (tx *Tx) MustGetIntegerOrDefault(ctx context.Context, def int, sql string, args ...interface{}) int {
	var result int
	if notFoundErrorOrPanic(tx.QueryRow(ctx, sql, args...).Scan(&result)) {
		return def
	}
	return result
}

func (tx *Tx) MustCopyFrom(ctx context.Context, tableName string, columns []string, length int, next func(int) ([]interface{}, error)) int64 {
	copied, err := tx.CopyFrom(ctx, pgx.Identifier{tableName}, columns, pgx.CopyFromSlice(length, next))
	if err != nil {
		panic(err)
	}
	return copied
}

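To close, a hypothetical sketch of the import flow described in the
Holded commit message above: dump the rows read from the spreadsheet
into a temporary table with MustCopyFrom, then let a database function
move them to the real relations. The relation, columns, and function
name are assumptions, not names taken from this file.

// Hypothetical sketch of the batch import; imported_contact and
// import_contacts() are assumed names, while MustBegin, MustCopyFrom,
// MustExec, and MustCommit come from this file.
func importContacts(ctx context.Context, conn *Conn, records [][]interface{}) {
	tx := conn.MustBegin(ctx)
	tx.MustCopyFrom(ctx, "imported_contact", []string{"name", "email"}, len(records), func(i int) ([]interface{}, error) {
		return records[i], nil
	})
	tx.MustExec(ctx, "select import_contacts()")
	tx.MustCommit(ctx)
}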