# Database: Create functions
You're a Supabase Postgres expert in writing database functions. Generate **high-quality PostgreSQL functions** that adhere to the following best practices:
## General Guidelines
1. **Default to `SECURITY INVOKER`:**

   - Functions should run with the permissions of the user invoking the function, ensuring safer access control.
   - Use `SECURITY DEFINER` only when explicitly required and explain the rationale.

2. **Set the `search_path` Configuration Parameter:**

   - Always set `search_path` to an empty string (`set search_path = '';`).
   - This avoids unexpected behavior and security risks caused by resolving object references in untrusted or unintended schemas.
   - Use fully qualified names (e.g., `schema_name.table_name`) for all database objects referenced within the function.

3. **Adhere to SQL Standards and Validation:**

   - Ensure all queries within the function are valid PostgreSQL SQL queries and compatible with the specified context (i.e., Supabase).

## Best Practices

1. **Minimize Side Effects:**

   - Prefer functions that return results over those that modify data unless they serve a specific purpose (e.g., triggers).

2. **Use Explicit Typing:**

   - Clearly specify input and output types, avoiding ambiguous or loosely typed parameters.

3. **Default to Immutable or Stable Functions:**

   - Where possible, declare functions as `IMMUTABLE` or `STABLE` to allow better optimization by PostgreSQL. Use `VOLATILE` only if the function modifies data or has side effects.

4. **Triggers (if Applicable):**

   - If the function is used as a trigger, include a valid `CREATE TRIGGER` statement that attaches the function to the desired table and event (e.g., `BEFORE INSERT`).
## Example Templates
### Simple Function with `SECURITY INVOKER`
```sql
create or replace function my_schema.hello_world()
returns text
language plpgsql
security invoker
set search_path = ''
as $$
begin
  return 'hello world';
end;
$$;
```
### Function with Parameters and Fully Qualified Object Names
```sql
create or replace function public.calculate_total_price(order_id bigint)
returns numeric
language plpgsql
security invoker
set search_path = ''
as $$
declare
  total numeric;
begin
  select sum(price * quantity)
  into total
  from public.order_items
  where order_id = calculate_total_price.order_id;

  return total;
end;
$$;
```
### Function as a Trigger
```sql
create or replace function my_schema.update_updated_at()
returns trigger
language plpgsql
security invoker
set search_path = ''
as $$
begin
  -- Update the "updated_at" column on row modification
  new.updated_at := now();
  return new;
end;
$$;
create trigger update_updated_at_trigger
before update on my_schema.my_table
for each row
execute function my_schema.update_updated_at();
```
### Function with Error Handling
```sql
create or replace function my_schema.safe_divide(numerator numeric, denominator numeric)
returns numeric
language plpgsql
security invoker
set search_path = ''
as $$
begin
  if denominator = 0 then
    raise exception 'Division by zero is not allowed';
  end if;

  return numerator / denominator;
end;
$$;
```
### Immutable Function for Better Optimization
```sql
create or replace function my_schema.full_name(first_name text, last_name text)
returns text
language sql
security invoker
set search_path = ''
immutable
as $$
select first_name || ' ' || last_name;
$$;
```
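### Stable Function that Reads Data

Functions that read from tables but never modify them should be declared `STABLE` rather than `IMMUTABLE`, since their results can change between statements as data changes. A hedged sketch (the `public.orders` table and its columns are hypothetical):

```sql
create or replace function public.count_orders(customer_id bigint)
returns bigint
language sql
security invoker
set search_path = ''
stable
as $$
  -- STABLE: the function reads data, so results may change between
  -- statements; never declare table-reading functions IMMUTABLE
  select count(*)
  from public.orders
  where orders.customer_id = count_orders.customer_id;
$$;
```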
# Using .clinerules with Cline

_danielsogl · Created Mar 31, 2025_

This guide explains how to effectively use `.clinerules` with Cline, the AI-powered coding assistant. The `.clinerules` file is a powerful configuration file that helps Cline understand your project's requirements, coding standards, and constraints. When placed in your project's root directory, it automatically guides Cline's behavior and ensures consistency across your codebase.

Place the `.clinerules` file in your project's root directory. Cline automatically detects and follows these rules for all files within the project.
```yaml
# Project Overview
project:
  name: 'Your Project Name'
  description: 'Brief project description'
  stack:
    - technology: 'Framework/Language'
      version: 'X.Y.Z'
    - technology: 'Database'
      version: 'X.Y.Z'

# Code Standards
standards:
  style:
    - 'Use consistent indentation (2 spaces)'
    - 'Follow language-specific naming conventions'
  documentation:
    - 'Include JSDoc comments for all functions'
    - 'Maintain up-to-date README files'
  testing:
    - 'Write unit tests for all new features'
    - 'Maintain minimum 80% code coverage'

# Security Guidelines
security:
  authentication:
    - 'Implement proper token validation'
    - 'Use environment variables for secrets'
  dataProtection:
    - 'Sanitize all user inputs'
    - 'Implement proper error handling'
```
When writing and maintaining rules:

- Be specific
- Maintain organization
- Update rules regularly
```yaml
# Common Patterns Example
patterns:
  components:
    - pattern: 'Use functional components by default'
    - pattern: 'Implement error boundaries for component trees'
  stateManagement:
    - pattern: 'Use React Query for server state'
    - pattern: 'Implement proper loading states'
```
Commit the `.clinerules` file to version control to support team collaboration.
Common troubleshooting areas:

- Rules not being applied
- Conflicting rules
- Performance considerations
```yaml
# Basic .clinerules Example
project:
  name: 'Web Application'
  type: 'Next.js Frontend'
standards:
  - 'Use TypeScript for all new code'
  - 'Follow React best practices'
  - 'Implement proper error handling'
testing:
  unit:
    - 'Jest for unit tests'
    - 'React Testing Library for components'
  e2e:
    - 'Cypress for end-to-end testing'
documentation:
  required:
    - 'README.md in each major directory'
    - 'JSDoc comments for public APIs'
    - 'Changelog updates for all changes'
```
```yaml
# Advanced .clinerules Example
project:
  name: 'Enterprise Application'
compliance:
  - 'GDPR requirements'
  - 'WCAG 2.1 AA accessibility'
architecture:
  patterns:
    - 'Clean Architecture principles'
    - 'Domain-Driven Design concepts'
security:
  requirements:
    - 'OAuth 2.0 authentication'
    - 'Rate limiting on all APIs'
    - 'Input validation with Zod'
```
# Database: Create RLS policies

You're a Supabase Postgres expert in writing row level security policies. Your purpose is to generate a policy with the constraints given by the user. You should first retrieve schema information to write policies for, usually the 'public' schema.

The output should use the following instructions:

- The generated SQL must be valid SQL.
- You can use only CREATE POLICY or ALTER POLICY queries, no other queries are allowed.
- Always use double apostrophe in SQL strings (eg. 'Night''s watch')
- You can add short explanations to your messages.
- The result should be a valid markdown. The SQL code should be wrapped in ``` (including sql language tag).
- Always use "auth.uid()" instead of "current_user".
- SELECT policies should always have USING but not WITH CHECK
- INSERT policies should always have WITH CHECK but not USING
- UPDATE policies should always have WITH CHECK and most often have USING
- DELETE policies should always have USING but not WITH CHECK
- Don't use `FOR ALL`. Instead separate into 4 separate policies for select, insert, update, and delete.
- The policy name should be short but detailed text explaining the policy, enclosed in double quotes.
- Always put explanations as separate text. Never use inline SQL comments.
- If the user asks for something that's not related to SQL policies, explain to the user that you can only help with policies.
- Discourage `RESTRICTIVE` policies and encourage `PERMISSIVE` policies, and explain why.

The output should look like this (note that an INSERT policy uses WITH CHECK, not USING):

```sql
CREATE POLICY "My descriptive policy." ON books FOR INSERT TO authenticated WITH CHECK ( (select auth.uid()) = author_id );
```

Since you are running in a Supabase environment, take note of these Supabase-specific additions below.
## Authenticated and unauthenticated roles

Supabase maps every request to one of the roles:

- `anon`: an unauthenticated request (the user is not logged in)
- `authenticated`: an authenticated request (the user is logged in)

These are actually [Postgres Roles](/docs/guides/database/postgres/roles). You can use these roles within your Policies using the `TO` clause:

```sql
create policy "Profiles are viewable by everyone"
on profiles
for select
to authenticated, anon
using ( true );

-- OR

create policy "Public profiles are viewable only by authenticated users"
on profiles
for select
to authenticated
using ( true );
```

Note that `for ...` must be added after the table but before the roles, and `to ...` must be added after `for ...`:

### Incorrect

```sql
create policy "Public profiles are viewable only by authenticated users"
on profiles
to authenticated
for select
using ( true );
```

### Correct

```sql
create policy "Public profiles are viewable only by authenticated users"
on profiles
for select
to authenticated
using ( true );
```

## Multiple operations

PostgreSQL policies do not support specifying multiple operations in a single FOR clause. You need to create separate policies for each operation.

### Incorrect

```sql
create policy "Profiles can be created and deleted by any user"
on profiles
for insert, delete -- cannot create a policy on multiple operators
to authenticated
with check ( true )
using ( true );
```

### Correct

```sql
create policy "Profiles can be created by any user"
on profiles
for insert
to authenticated
with check ( true );

create policy "Profiles can be deleted by any user"
on profiles
for delete
to authenticated
using ( true );
```

## Helper functions

Supabase provides some helper functions that make it easier to write Policies.

### `auth.uid()`

Returns the ID of the user making the request.

### `auth.jwt()`

Returns the JWT of the user making the request.
Anything that you store in the user's `raw_app_meta_data` column or the `raw_user_meta_data` column will be accessible using this function. It's important to know the distinction between these two:

- `raw_user_meta_data` - can be updated by the authenticated user using the `supabase.auth.update()` function. It is not a good place to store authorization data.
- `raw_app_meta_data` - cannot be updated by the user, so it's a good place to store authorization data.

The `auth.jwt()` function is extremely versatile. For example, if you store some team data inside `app_metadata`, you can use it to determine whether a particular user belongs to a team. For example, if this was an array of IDs:

```sql
create policy "User is in team"
on my_table
to authenticated
using ( team_id in (select auth.jwt() -> 'app_metadata' -> 'teams') );
```

### MFA

The `auth.jwt()` function can be used to check for [Multi-Factor Authentication](/docs/guides/auth/auth-mfa#enforce-rules-for-mfa-logins). For example, you could restrict a user from updating their profile unless they have at least 2 levels of authentication (Assurance Level 2):

```sql
create policy "Restrict updates."
on profiles
as restrictive
for update
to authenticated
using ( (select auth.jwt()->>'aal') = 'aal2' );
```

## RLS performance recommendations

Every authorization system has an impact on performance. While row level security is powerful, the performance impact is important to keep in mind. This is especially true for queries that scan every row in a table - like many `select` operations, including those using limit, offset, and ordering.

Based on a series of [tests](https://github.com/GaryAustin1/RLS-Performance), we have a few recommendations for RLS:

### Add indexes

Make sure you've added [indexes](/docs/guides/database/postgres/indexes) on any columns used within the Policies which are not already indexed (or primary keys).
For a Policy like this:

```sql
create policy "Users can access their own records" on test_table
to authenticated
using ( (select auth.uid()) = user_id );
```

You can add an index like:

```sql
create index userid on test_table using btree (user_id);
```

### Call functions with `select`

You can use a `select` statement to improve policies that use functions. For example, instead of this:

```sql
create policy "Users can access their own records" on test_table
to authenticated
using ( auth.uid() = user_id );
```

You can do:

```sql
create policy "Users can access their own records" on test_table
to authenticated
using ( (select auth.uid()) = user_id );
```

This method works well for JWT functions like `auth.uid()` and `auth.jwt()` as well as `security definer` Functions. Wrapping the function causes an `initPlan` to be run by the Postgres optimizer, which allows it to "cache" the results per-statement, rather than calling the function on each row.

Caution: You can only use this technique if the results of the query or function do not change based on the row data.

### Minimize joins

You can often rewrite your Policies to avoid joins between the source and the target table. Instead, try to organize your policy to fetch all the relevant data from the target table into an array or set, then you can use an `IN` or `ANY` operation in your filter.
For example, this is an example of a slow policy which joins the source `test_table` to the target `team_user`:

```sql
create policy "Users can access records belonging to their teams" on test_table
to authenticated
using (
  (select auth.uid()) in (
    select user_id
    from team_user
    where team_user.team_id = team_id -- joins to the source "test_table.team_id"
  )
);
```

We can rewrite this to avoid this join, and instead select the filter criteria into a set:

```sql
create policy "Users can access records belonging to their teams" on test_table
to authenticated
using (
  team_id in (
    select team_id
    from team_user
    where user_id = (select auth.uid()) -- no join
  )
);
```

### Specify roles in your policies

Always specify the role(s) a policy applies to, using the `TO` operator. For example, instead of this query:

```sql
create policy "Users can access their own records" on rls_test
using ( auth.uid() = user_id );
```

Use:

```sql
create policy "Users can access their own records" on rls_test
to authenticated
using ( (select auth.uid()) = user_id );
```

This prevents the policy `( (select auth.uid()) = user_id )` from running for any `anon` users, since the execution stops at the `to authenticated` step.
# Database: Create migration

You are a Postgres Expert who loves creating secure database schemas. This project uses the migrations provided by the Supabase CLI.

## Creating a migration file

Given the context of the user's message, create a database migration file inside the folder `supabase/migrations/`.

The file MUST be named in the format `YYYYMMDDHHmmss_short_description.sql` (note the casing distinguishing months, minutes, and seconds) using UTC time:

1. `YYYY` - Four digits for the year (e.g., `2024`).
2. `MM` - Two digits for the month (01 to 12).
3. `DD` - Two digits for the day of the month (01 to 31).
4. `HH` - Two digits for the hour in 24-hour format (00 to 23).
5. `mm` - Two digits for the minute (00 to 59).
6. `ss` - Two digits for the second (00 to 59).
7. Add an appropriate description for the migration.

For example:

```
20240906123045_create_profiles.sql
```

## SQL Guidelines

Write Postgres-compatible SQL code for Supabase migration files that:

- Includes a header comment with metadata about the migration, such as the purpose, affected tables/columns, and any special considerations.
- Includes thorough comments explaining the purpose and expected behavior of each migration step.
- Write all SQL in lowercase.
- Add copious comments for any destructive SQL commands, including truncating, dropping, or column alterations.
- When creating a new table, you MUST enable Row Level Security (RLS) even if the table is intended for public access.
- When creating RLS Policies:
  - Ensure the policies cover all relevant access scenarios (e.g. select, insert, update, delete) based on the table's purpose and data sensitivity.
  - If the table is intended for public access the policy can simply return `true`.
  - RLS Policies should be granular: one policy for `select`, one for `insert`, etc., and one for each Supabase role (`anon` and `authenticated`). DO NOT combine Policies even if the functionality is the same for both roles.
  - Include comments explaining the rationale and intended behavior of each security policy.

The generated SQL code should be production-ready, well-documented, and aligned with Supabase's best practices.
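As an illustrative sketch only (the table, columns, and policy names are hypothetical), a migration file following these guidelines might look like:

```sql
-- Migration: create profiles table
-- Purpose: store public profile data for each auth user
-- Affected: new table public.profiles; RLS enabled with granular policies

create table public.profiles (
  id bigint generated always as identity primary key,
  user_id uuid not null references auth.users (id),
  username text not null
);

comment on table public.profiles is 'Public profile data for each user.';

-- rls must be enabled even though profiles are publicly readable
alter table public.profiles enable row level security;

-- granular policies: one per operation and per role
create policy "Profiles are viewable by anon users." on public.profiles
for select to anon using ( true );

create policy "Profiles are viewable by authenticated users." on public.profiles
for select to authenticated using ( true );

create policy "Users can insert their own profile." on public.profiles
for insert to authenticated with check ( (select auth.uid()) = user_id );
```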
# Postgres SQL Style Guide

## General

- Use lowercase for SQL reserved words to maintain consistency and readability.
- Employ consistent, descriptive identifiers for tables, columns, and other database objects.
- Use white space and indentation to enhance the readability of your code.
- Store dates in ISO 8601 format (`yyyy-mm-ddThh:mm:ss.sssss`).
- Include comments for complex logic, using `/* ... */` for block comments and `--` for line comments.

## Naming Conventions

- Avoid SQL reserved words and ensure names are unique and under 63 characters.
- Use snake_case for tables and columns.
- Prefer plurals for table names.
- Prefer singular names for columns.

## Tables

- Avoid prefixes like `tbl_` and ensure no table name matches any of its column names.
- Always add an `id` column of type `identity generated always` unless otherwise specified.
- Create all tables in the `public` schema unless otherwise specified.
- Always add the schema to SQL queries for clarity.
- Always add a comment to describe what the table does. The comment can be up to 1024 characters.

## Columns

- Use singular names and avoid generic names like `id`.
- For references to foreign tables, use the singular of the table name with the `_id` suffix. For example `user_id` to reference the `users` table.
- Always use lowercase except in cases involving acronyms or when readability would be enhanced by an exception.

#### Examples:

```sql
create table books (
  id bigint generated always as identity primary key,
  title text not null,
  author_id bigint references authors (id)
);
comment on table books is 'A list of all the books in the library.';
```

## Queries

- When the query is shorter keep it on just a few lines. As it gets larger start adding newlines for readability.
- Add spaces for readability.
Smaller queries:

```sql
select *
from employees
where end_date is null;

update employees
set end_date = '2023-12-31'
where employee_id = 1001;
```

Larger queries:

```sql
select
  first_name,
  last_name
from
  employees
where
  start_date between '2021-01-01' and '2021-12-31'
  and status = 'employed';
```

### Joins and Subqueries

- Format joins and subqueries for clarity, aligning them with related SQL clauses.
- Prefer full table names when referencing tables. This helps for readability.

```sql
select
  employees.employee_name,
  departments.department_name
from
  employees
join
  departments on employees.department_id = departments.department_id
where
  employees.start_date > '2022-01-01';
```

## Aliases

- Use meaningful aliases that reflect the data or transformation applied, and always include the `as` keyword for clarity.

```sql
select count(*) as total_employees
from employees
where end_date is null;
```

## Complex queries and CTEs

- If a query is extremely complex, prefer a CTE.
- Make sure the CTE is clear and linear. Prefer readability over performance.
- Add comments to each block.

```sql
with department_employees as (
  -- Get all employees and their departments
  select
    employees.department_id,
    employees.first_name,
    employees.last_name,
    departments.department_name
  from
    employees
  join
    departments on employees.department_id = departments.department_id
),
employee_counts as (
  -- Count how many employees in each department
  select
    department_name,
    count(*) as num_employees
  from
    department_employees
  group by
    department_name
)
select
  department_name,
  num_employees
from
  employee_counts
order by
  department_name;
```
# Writing Supabase Edge Functions

You're an expert in writing TypeScript and Deno JavaScript runtime. Generate **high-quality Supabase Edge Functions** that adhere to the following best practices:

## Guidelines

1. Try to use Web APIs and Deno's core APIs instead of external dependencies (eg: use fetch instead of Axios, use WebSockets API instead of node-ws).
2. If you are reusing utility methods between Edge Functions, add them to `supabase/functions/_shared` and import using a relative path. Do NOT have cross dependencies between Edge Functions.
3. Do NOT use bare specifiers when importing dependencies. If you need to use an external dependency, make sure it's prefixed with either `npm:` or `jsr:`. For example, `@supabase/supabase-js` should be written as `npm:@supabase/supabase-js`.
4. For external imports, always define a version. For example, `npm:@express` should be written as `npm:express@4.18.2`.
5. For external dependencies, importing via `npm:` and `jsr:` is preferred. Minimize the use of imports from `deno.land/x`, `esm.sh` and `unpkg.com`. If you have a package from one of those CDNs, you can replace the CDN hostname with the `npm:` specifier.
6. You can also use Node built-in APIs. You will need to import them using the `node:` specifier. For example, to import Node process: `import process from "node:process"`. Use Node APIs when you find gaps in Deno APIs.
7. Do NOT use `import { serve } from "https://deno.land/std@0.168.0/http/server.ts"`. Instead use the built-in `Deno.serve`.
8. The following environment variables (i.e. secrets) are pre-populated in both local and hosted Supabase environments. Users don't need to manually set them:
   - SUPABASE_URL
   - SUPABASE_ANON_KEY
   - SUPABASE_SERVICE_ROLE_KEY
   - SUPABASE_DB_URL
9. To set other environment variables (i.e. secrets), users can put them in an env file and run `supabase secrets set --env-file path/to/env-file`.
10. A single Edge Function can handle multiple routes.
It is recommended to use a library like Express or Hono to handle the routes as it's easier for developers to understand and maintain. Each route must be prefixed with `/function-name` so they are routed correctly.

11. File write operations are ONLY permitted on the `/tmp` directory. You can use either Deno or Node File APIs.
12. Use the `EdgeRuntime.waitUntil(promise)` static method to run long-running tasks in the background without blocking response to a request. Do NOT assume it is available in the request / execution context.

## Example Templates

### Simple Hello World Function

```tsx
interface reqPayload {
  name: string
}

console.info('server started')

Deno.serve(async (req: Request) => {
  const { name }: reqPayload = await req.json()
  const data = {
    message: `Hello ${name} from foo!`,
  }

  return new Response(JSON.stringify(data), {
    headers: { 'Content-Type': 'application/json', Connection: 'keep-alive' },
  })
})
```

### Example Function using Node built-in API

```tsx
import { randomBytes } from 'node:crypto'
import { createServer } from 'node:http'
import process from 'node:process'

const generateRandomString = (length) => {
  const buffer = randomBytes(length)
  return buffer.toString('hex')
}

const randomString = generateRandomString(10)
console.log(randomString)

const server = createServer((req, res) => {
  const message = `Hello`
  res.end(message)
})

server.listen(9999)
```

### Using npm packages in Functions

```tsx
import express from 'npm:express@4.18.2'

const app = express()

app.get(/(.*)/, (req, res) => {
  res.send('Welcome to Supabase')
})

app.listen(8000)
```

### Generate embeddings using the built-in Supabase.ai API

```tsx
const model = new Supabase.ai.Session('gte-small')

Deno.serve(async (req: Request) => {
  const params = new URL(req.url).searchParams
  const input = params.get('text')
  const output = await model.run(input, { mean_pool: true, normalize: true })
  return new Response(JSON.stringify(output), {
    headers: { 'Content-Type': 'application/json', Connection: 'keep-alive' },
  })
})
```
# State Management Standards for Supabase

This document outlines coding standards for state management when building applications with Supabase. These standards aim to promote maintainability, performance, security, and a consistent development approach across projects. They are based on best practices and the capabilities of the latest Supabase features.

## 1. Choosing the Right State Management Approach

Selecting the correct state management strategy is crucial for scalability and maintainability. Consider the complexity of your application when deciding on a suitable method. Simpler applications may benefit from simpler solutions, while larger applications will require more robust patterns.

**Standards:**

* **Do This:** Begin by assessing your application's complexity to determine the appropriate state management level. React Context or Vue's provide/inject might suffice for simple apps dealing with user authentication. For more complex scenarios, investigate Redux, Zustand, or XState. Consider the trade-offs between complexity, boilerplate, and performance when making your choice.
* **Don't Do This:** Immediately jump to a complex library like Redux for a small application. Over-engineering can lead to unnecessary complexity and maintenance overhead. Similarly, don't rely solely on prop drilling for deep component hierarchies, as this reduces maintainability.
* **Why:** Selecting the right balance avoids over-complicating simple apps or creating unwieldy complex apps with poor state management.
**Example (React Context for Auth State):**

```jsx
// AuthContext.jsx
import React, { createContext, useState, useEffect, useContext } from 'react';
import { supabase } from './supabaseClient'; // Ensure correct initialization

const AuthContext = createContext();

export const AuthProvider = ({ children }) => {
  const [session, setSession] = useState(null);
  const [user, setUser] = useState(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    supabase.auth.getSession()
      .then(({ data: { session } }) => {
        setSession(session);
        setUser(session?.user ?? null);
      })
      .finally(() => setLoading(false));

    const { data: { subscription } } = supabase.auth.onAuthStateChange((_event, session) => {
      setSession(session);
      setUser(session?.user ?? null);
    });

    // Unsubscribe on unmount to avoid leaking the auth listener
    return () => subscription.unsubscribe();
  }, []);

  const value = {
    session,
    user,
    signIn: (email, password) => supabase.auth.signInWithPassword({ email, password }),
    signUp: (email, password) => supabase.auth.signUp({ email, password }),
    signOut: () => supabase.auth.signOut(),
    loading
  };

  return (
    <AuthContext.Provider value={value}>
      {!loading && children} {/* Conditionally render children after initial auth check */}
    </AuthContext.Provider>
  );
};

export const useAuth = () => {
  return useContext(AuthContext);
};
```

```jsx
// _app.js or index.js
import { AuthProvider } from '../contexts/AuthContext';

function MyApp({ Component, pageProps }) {
  return (
    <AuthProvider>
      <Component {...pageProps} />
    </AuthProvider>
  );
}

export default MyApp;
```

**Anti-Pattern:** Using `useState` and `useEffect` in every component that requires authentication information instead of using React Context. This leads to redundant code and makes it difficult to manage authentication state centrally.

## 2. Real-time Data Synchronization

Supabase shines with its real-time capabilities. Leveraging these effectively requires careful consideration of state management.

**Standards:**

* **Do This:** Use Supabase's real-time subscriptions to automatically update application state when data changes in the database.
Handle potential race conditions and errors gracefully. Employ optimistic updates for quicker perceived performance.
* **Don't Do This:** Poll the database periodically for updates. This is inefficient and can quickly exhaust your Supabase resources. Additionally, avoid directly mutating the state without considering the implications of asynchronous updates, which leads to inconsistencies.
* **Why:** Real-time subscriptions offer efficient, near-instantaneous data synchronization, enhancing user experience and minimizing resource consumption.

**Example (Real-time Subscription with React):**

```jsx
import React, { useState, useEffect } from 'react';
import { supabase } from './supabaseClient';

function RealtimeMessages() {
  const [messages, setMessages] = useState([]);

  useEffect(() => {
    const subscription = supabase
      .channel('public:messages')
      .on('postgres_changes', { event: '*', schema: 'public', table: 'messages' }, (payload) => {
        // Optimistically update the UI
        if (payload.eventType === 'INSERT') {
          setMessages(prevMessages => [...prevMessages, payload.new]);
        } else if (payload.eventType === 'UPDATE') {
          setMessages(prevMessages =>
            prevMessages.map(message => (message.id === payload.new.id ? payload.new : message))
          );
        } else if (payload.eventType === 'DELETE') {
          setMessages(prevMessages => prevMessages.filter(message => message.id !== payload.old.id));
        }
      })
      .subscribe();

    return () => {
      supabase.removeChannel(subscription);
    };
  }, []);

  useEffect(() => {
    // Initial load of messages
    async function fetchMessages() {
      const { data, error } = await supabase
        .from('messages')
        .select('*')
        .order('created_at', { ascending: false });

      if (error) {
        console.error('Error fetching messages:', error);
      } else {
        setMessages(data);
      }
    }

    fetchMessages();
  }, []);

  return (
    <ul>
      {messages.map(message => (
        <li key={message.id}>{message.content}</li>
      ))}
    </ul>
  );
}

export default RealtimeMessages;
```

**Explanation:**

* This example subscribes to changes in the `messages` table.
The `on` handler is called whenever a new row is inserted, updated, or deleted.
* The `setMessages` function is used to update the state with the new data.
* The `return` statement in `useEffect` unsubscribes from the channel when the component unmounts, preventing memory leaks. The use of `supabase.removeChannel(subscription)` is crucial for completely terminating the real-time connection.
* An initial `fetchMessages` call is done to load existing data.
* Optimistic updates are implemented so that the UI is updated immediately, rather than waiting for confirmation.

**Anti-Pattern:** Ignoring the unsubscribe function returned by `supabase.channel('...').subscribe()`. This will lead to memory leaks and potentially excessive database load as stale subscriptions remain active.

## 3. Data Fetching and Caching

Efficient data fetching and caching significantly impact application performance.

**Standards:**

* **Do This:** Use `supabase.from('table').select('*')` for fetching data. Implement client-side caching using libraries like `react-query` or `swr` to reduce network requests for frequently accessed data. Stale-while-revalidate (SWR) is a very effective strategy. Consider using Supabase Edge Functions for complex data transformations or aggregations *before* the data reaches the client.
* **Don't Do This:** Fetch all data upfront when only a subset is required. This wastes bandwidth and slows down initial load times. Avoid directly manipulating state inside the `useEffect` hook's callback without considering dependencies.
* **Why:** Caching minimizes network requests, enhancing speed and user experience. Selective data fetching conserves resources.
**Example (Data Fetching with `react-query`):**

```jsx
import React from 'react';
import { useQuery } from 'react-query';
import { supabase } from './supabaseClient';

async function fetchPosts() {
  const { data, error } = await supabase
    .from('posts')
    .select('*')
    .order('created_at', { ascending: false });

  if (error) {
    throw new Error(error.message);
  }

  return data;
}

function Posts() {
  const { data, error, isLoading } = useQuery('posts', fetchPosts);

  if (isLoading) return <p>Loading...</p>;
  if (error) return <p>Error: {error.message}</p>;

  return (
    <ul>
      {data.map(post => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}

export default Posts;
```

**Explanation:**

* `useQuery` automatically handles caching, background updates, and error retries.
* The `posts` key is used to identify and cache the query result. `react-query` intelligently invalidates the cache when necessary.
* The `fetchPosts` function encapsulates fetching post data from Supabase and handling errors.

**Anti-Pattern:** Repeatedly fetching the same data from Supabase without caching. This leads to unnecessary network requests and poor application performance. Using `localStorage` directly for critical data without proper security considerations (e.g., encryption) is dangerous.

## 4. Optimistic Updates and Conflict Resolution

Optimistic updates improve perceived performance, but require handling potential conflicts.

**Standards:**

* **Do This:** Implement optimistic updates by updating the UI immediately after a user action, *before* confirming the update with Supabase. Implement robust error handling to revert the UI if the update fails. Use Supabase's row-level security (RLS) to prevent unauthorized data manipulation, which reduces the chances of conflicts. Use `try...catch` blocks around Supabase calls.
* **Don't Do This:** Wait for the server response before updating the UI. This creates a laggy user experience. Assume that all updates will succeed without implementing error handling.
* **Why:** Optimistic updates make the application feel more responsive, while proper conflict resolution maintains data integrity.

**Example (Optimistic Update with `react-query`):**

```jsx
import React from 'react';
import { useMutation, useQueryClient } from 'react-query';
import { supabase } from './supabaseClient';

async function updatePost(postId, newTitle) {
  const { error } = await supabase
    .from('posts')
    .update({ title: newTitle })
    .eq('id', postId);

  if (error) {
    throw new Error(error.message);
  }
}

function EditPost({ postId, initialTitle }) {
  const queryClient = useQueryClient();

  const mutation = useMutation(
    ({ postId, title }) => updatePost(postId, title),
    {
      onMutate: async (newPost) => {
        // Cancel any outgoing refetches (so they don't overwrite our optimistic update)
        await queryClient.cancelQueries('posts');

        // Snapshot the previous value
        const previousPosts = queryClient.getQueryData('posts');

        // Optimistically update to the new value (guard against an empty cache)
        queryClient.setQueryData('posts', (old) =>
          (old ?? []).map((post) =>
            post.id === postId ? { ...post, title: newPost.title } : post
          )
        );

        // Return a context object with the snapshotted value
        return { previousPosts };
      },
      onError: (err, newPost, context) => {
        // Roll back to the snapshot if the update fails
        queryClient.setQueryData('posts', context.previousPosts);
      },
      onSettled: () => {
        queryClient.invalidateQueries('posts');
      },
    }
  );

  const handleSubmit = (event) => {
    event.preventDefault();
    const newTitle = event.target.title.value;
    mutation.mutate({ postId, title: newTitle });
  };

  return (
    <form onSubmit={handleSubmit}>
      <input type="text" name="title" defaultValue={initialTitle} />
      <button type="submit">Update</button>
      {mutation.isLoading ? 'Updating...' : null}
      {mutation.isError ? `An error occurred: ${mutation.error.message}` : null}
    </form>
  );
}

export default EditPost;
```

**Explanation:**

* `useMutation` handles the update operation.
* `onMutate` optimistically updates the UI before the server responds. It also snapshots the previous value so it can be restored in case of an error.
* `onError` reverts the UI if the update fails.
* `onSettled` invalidates the cache after the update completes.

**Anti-Pattern:** Failing to revert the UI after an optimistic update fails leads to a broken user experience where displayed data doesn't match the database state.

## 5. Row-Level Security (RLS) and Data Integrity

RLS is paramount for securing your Supabase data.

**Standards:**

* **Do This:** Implement RLS policies on all tables to control data access based on user roles and permissions. Use the `auth.uid()` function in RLS policies to restrict access to data based on the authenticated user. Validate data on the client side, but *always* enforce validation on the server side using RLS and database constraints.
* **Don't Do This:** Rely solely on client-side validation for data integrity; client-side validation can be bypassed. Grant overly permissive access to data via RLS; follow the principle of least privilege.
* **Why:** RLS prevents unauthorized data access and manipulation, ensuring data integrity.
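Client-side validation is still worth doing for fast user feedback, as long as the same rules are enforced server-side. A minimal sketch of such a validator (the `validatePost` name and the specific rules are illustrative assumptions, not part of the Supabase SDK):

```javascript
// Hypothetical client-side validator for a post payload. This is a convenience
// check only -- it can be bypassed, so the same rules must also be enforced
// server-side via RLS policies and database constraints.
function validatePost({ title, content }) {
  const errors = [];
  if (typeof title !== 'string' || title.trim().length === 0) {
    errors.push('title is required');
  }
  if (typeof title === 'string' && title.length > 200) {
    errors.push('title must be at most 200 characters');
  }
  if (typeof content !== 'string' || content.trim().length === 0) {
    errors.push('content is required');
  }
  return { valid: errors.length === 0, errors };
}
```

These checks only improve UX; the RLS policies and database constraints, like those shown in the example below, remain the actual enforcement point.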
**Example (RLS Policy):**

```sql
-- Enable RLS on the 'posts' table
alter table public.posts enable row level security;

-- Create a policy that allows users to select only their own posts
create policy "Users can select their own posts"
on public.posts
for select
using (auth.uid() = user_id);

-- Create a policy that allows users to insert posts, setting the user_id to their own
create policy "Users can insert posts"
on public.posts
for insert
with check (auth.uid() = user_id);

-- Create a policy that allows users to update their own posts
create policy "Users can update their own posts"
on public.posts
for update
using (auth.uid() = user_id);

-- Create a policy that allows users to delete their own posts
create policy "Users can delete their own posts"
on public.posts
for delete
using (auth.uid() = user_id);
```

**Explanation:**

* These policies restrict access to posts based on the `user_id` column.
* Users can only select, insert, update, and delete posts where `auth.uid()` matches the row's `user_id`.

**Anti-Pattern:** Disabling RLS or creating overly broad policies exposes your data to unauthorized access, potentially leading to data breaches or manipulation.

## 6. State Management Libraries and Supabase

Integrating state management libraries with Supabase requires careful consideration of data flow and reactivity.

**Standards:**

* **Do This:** Use libraries such as Zustand, Redux Toolkit, or Vuex to manage complex application state. When integrating with Supabase's real-time capabilities, create action creators or mutations that dispatch updates to the store based on real-time events. Consider using thunks or sagas for asynchronous operations like data fetching and updates.
* **Don't Do This:** Directly mutate the store state within component event handlers; this bypasses the state management library's control and makes debugging difficult. Use excessive global state when component-level state would suffice.
* **Why:** State management libraries provide a predictable and centralized way to manage application state, improving maintainability and testability.

**Example (Zustand with Supabase Realtime):**

```javascript
import { create } from 'zustand';
import { supabase } from './supabaseClient';

const useMessageStore = create((set, get) => ({
  messages: [],
  loading: false,
  error: null,

  fetchMessages: async () => {
    set({ loading: true, error: null });
    try {
      const { data, error } = await supabase
        .from('messages')
        .select('*')
        .order('created_at', { ascending: false });
      if (error) {
        throw error;
      }
      set({ messages: data, loading: false });
    } catch (error) {
      set({ error: error.message, loading: false });
    }
  },

  subscribeToMessages: () => {
    const channel = supabase
      .channel('public:messages')
      .on(
        'postgres_changes',
        { event: '*', schema: 'public', table: 'messages' },
        (payload) => {
          if (payload.eventType === 'INSERT') {
            set({ messages: [payload.new, ...get().messages] }); // Prepend the new message
          } else if (payload.eventType === 'UPDATE') {
            set(state => ({
              messages: state.messages.map(message =>
                message.id === payload.new.id ? payload.new : message
              )
            }));
          } else if (payload.eventType === 'DELETE') {
            set(state => ({
              messages: state.messages.filter(message => message.id !== payload.old.id)
            }));
          }
        }
      )
      .subscribe();
    return channel; // Return the channel so the caller can clean it up
  },
}));

export default useMessageStore;

// Usage in a component:
import { useEffect } from 'react';
import useMessageStore from './messageStore';

function MessageList() {
  const { messages, loading, error, fetchMessages, subscribeToMessages } = useMessageStore();

  useEffect(() => {
    fetchMessages();
    const channel = subscribeToMessages();
    // Unsubscribe on unmount to avoid memory leaks and stale subscriptions
    return () => supabase.removeChannel(channel);
  }, [fetchMessages, subscribeToMessages]);

  if (loading) return <p>Loading...</p>;
  if (error) return <p>Error: {error}</p>;

  return (
    <ul>
      {messages.map(message => (
        <li key={message.id}>{message.content}</li>
      ))}
    </ul>
  );
}
```

**Explanation:**

* Zustand is a lightweight state management library; `create` defines the store's state and actions.
* `fetchMessages` fetches messages from Supabase and updates the store.
* `subscribeToMessages` subscribes to real-time updates from Supabase, updates the store accordingly, and returns the channel so the component can unsubscribe on unmount.
* Actions update the store state via `set()`; `get()` retrieves the current state within the store.

**Anti-Pattern:** Creating overly complex reducers or actions that are difficult to understand and maintain. Neglecting to unsubscribe from real-time events when the component unmounts leads to memory leaks.

## 7. Error Handling and Resilience

Robust error handling is critical for building reliable applications.

**Standards:**

* **Do This:** Wrap Supabase API calls in `try...catch` blocks to handle potential errors. Display user-friendly error messages. Implement retry mechanisms for transient errors. Log errors to a central logging service for analysis. Use Supabase's GoTrue error codes to give users more specific feedback.
* **Don't Do This:** Ignore errors or display generic error messages.
Allow unhandled exceptions to crash the application. Expose sensitive error information (e.g., database connection strings) to the user.
* **Why:** Proper error handling prevents application crashes, provides informative feedback to the user, and facilitates debugging.

**Example (Error Handling):**

```javascript
import { supabase } from './supabaseClient';

async function createPost(title, content) {
  try {
    const { data, error } = await supabase
      .from('posts')
      .insert([{ title, content }]);

    if (error) {
      // Handle Supabase-specific errors
      if (error.code === '23505') { // Unique constraint violation
        alert('A post with this title already exists.');
      } else {
        console.error('Supabase error:', error); // Log for debugging
        alert('Failed to create post. Please try again.'); // User-friendly message
      }
      return null; // Indicate failure
    }

    return data; // Return the created post data
  } catch (e) {
    // Handle network or other unexpected errors
    console.error('Unexpected error:', e);
    alert('An unexpected error occurred. Please try again later.');
    return null;
  }
}

// Usage:
createPost('My Post Title', 'My Post Content')
  .then(newPost => {
    if (newPost) {
      console.log('Post created successfully!', newPost);
    } else {
      console.log('Post creation failed.');
    }
  });
```

**Explanation:**

* The `createPost` function wraps the Supabase API call in a `try...catch` block.
* When an error occurs, it is logged to the console for debugging and a specific, user-friendly message is shown in the UI.
* Checks for SQL error codes (such as `23505` for unique-constraint violations) handle specific error types like duplicate content.

**Anti-Pattern:** Using `console.log` for error reporting in production is inadequate for tracking and addressing issues. Failing to implement proper logging mechanisms makes it difficult to diagnose and resolve issues in production environments.

## 8. Transactions

Transactions are essential for guaranteeing data consistency when performing multiple operations.
**Standards:**

* **Do This:** Use Supabase database functions to encapsulate multiple database operations so they succeed or fail together. A PostgreSQL function always runs inside the caller's transaction, so its statements are atomic by default; use a `BEGIN ... EXCEPTION ... END` block if you need to catch errors and roll back the block's changes. (Explicit `COMMIT` and `ROLLBACK` statements are only allowed in procedures invoked with `CALL`, not in functions.)
* **Don't Do This:** Perform multiple database operations independently without a transaction. This can lead to inconsistent data if one operation fails.
* **Why:** Transactions guarantee atomicity, consistency, isolation, and durability (ACID properties), preventing data corruption and ensuring reliable data updates.

**Example (Database Function with Transactional Behavior):**

```sql
-- Drop the function if it already exists
drop function if exists public.transfer_funds(sender_id uuid, receiver_id uuid, amount numeric);

-- Create a function to transfer funds between two accounts.
-- security definer is used so the function can update both accounts regardless
-- of the caller's RLS policies; search_path is pinned per best practice.
create or replace function public.transfer_funds(sender_id uuid, receiver_id uuid, amount numeric)
returns void
language plpgsql
security definer
set search_path = ''
as $$
declare
  sender_balance numeric;
begin
  -- Lock the sender's row and check for sufficient balance
  select balance into sender_balance
  from public.accounts
  where id = sender_id
  for update;

  if sender_balance < amount then
    raise exception 'Insufficient funds';
  end if;

  -- Deduct the amount from the sender's account
  update public.accounts set balance = balance - amount where id = sender_id;

  -- Add the amount to the receiver's account
  update public.accounts set balance = balance + amount where id = receiver_id;

  -- No explicit commit/rollback: the function runs inside the caller's
  -- transaction, so if any statement fails (or an exception is raised),
  -- every change made by the function is rolled back automatically.
end;
$$;
```

**Explanation:**

* The function `transfer_funds` takes the sender's ID, the receiver's ID, and the amount to transfer as input.
* It locks the sender's row (`for update`) and checks for sufficient balance before moving funds, preventing concurrent transfers from overdrawing the account.
* Because the whole function executes within a single transaction, an error at any point rolls back both updates; there is no window in which funds are deducted but not credited.

**Anti-Pattern:** Performing financial transactions *outside* of a database transaction. This could lead to funds being deducted from one account but not credited to another, resulting in a data inconsistency.

These coding standards provide a solid foundation for building robust, scalable, and secure applications with Supabase, particularly in the area of state management. By adhering to these standards, development teams can ensure consistency, maintainability, and optimal performance across their projects.