The New York Responsible AI Safety and Education (RAISE) Act: What you need to know

Norton Rose Fulbright

On December 19, 2025, Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act. The RAISE Act regulates "frontier" artificial intelligence (AI) models by imposing obligations on their developers.
Among other things, the RAISE Act establishes safety, transparency, testing and incident-reporting obligations, along with potential civil penalties for violations of those obligations. The RAISE Act was signed right after the federal government issued an executive order [1] aimed at regulating state AI laws, and at the same time that other states, like Texas, are about to begin enforcing their own AI governance statutes. [2] Because the RAISE Act takes effect on March 19, 2026, [3] covered companies must act promptly to meet its new obligations. This article is focused on helping companies make those preparations and on identifying the RAISE Act’s relevant obligations. It begins with a summary of the RAISE Act’s key provisions and what you should consider doing before the RAISE Act takes effect, follows with a more detailed discussion of the same and concludes with key takeaways and a look toward the future.

The RAISE Act takes effect on March 19, 2026.
The RAISE Act’s obligations apply only to “large developers” of “frontier models” developed, deployed or operating in whole or in part in New York. A “large developer” means anyone who has trained a frontier model and spent over US$100 million, in aggregate, on compute costs for training frontier models.
A “frontier model” means either (a) an AI model trained using greater than 10^26 computational operations with a compute cost exceeding US$100 million, or (b) an AI model produced by applying knowledge distillation (a supervised learning technique that uses a larger AI model or its output to train a smaller AI model) to such a model, provided the compute cost for the distilled model exceeds US$5 million.
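The two-pronged threshold test can be summarized in a short Python sketch. This is only an illustration of the numeric thresholds described above; the function and parameter names are hypothetical, and whether a given model is actually covered is a legal question that turns on the statute's full text.

```python
# Illustrative sketch of the RAISE Act's "frontier model" thresholds
# (NY Gen. Bus. L. § 1420). Names are hypothetical, not statutory.

def is_frontier_model(
    training_ops: float,            # total computational operations used in training
    training_compute_usd: float,    # aggregate compute cost of training
    distilled_from_frontier: bool = False,
    distillation_compute_usd: float = 0.0,
) -> bool:
    """Rough check against the Act's two definitional prongs."""
    # Prong (a): trained using more than 10^26 operations at a
    # compute cost exceeding US$100 million.
    direct = training_ops > 1e26 and training_compute_usd > 100_000_000
    # Prong (b): produced by knowledge distillation from such a model,
    # where the distillation compute cost exceeds US$5 million.
    distilled = distilled_from_frontier and distillation_compute_usd > 5_000_000
    return direct or distilled

# A model trained on 2e26 operations for US$150 million meets prong (a):
print(is_frontier_model(2e26, 150_000_000))  # True
```

Note that prong (b) means a cheaply distilled model can still be covered even though its own training run falls far below the prong (a) thresholds.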
The RAISE Act applies to “frontier models” that are “developed, deployed or operating in whole or in part in New York,” which casts a wide jurisdictional net and can be triggered even when only a component of model operations or deployment intersects with New York. NY Gen. Bus. L. § 1424. This means that even partial operational touchpoints may subject a large developer to the RAISE Act’s requirements.

The RAISE Act requires large developers of frontier models to do all of the following before deploying a frontier model:

- Implement a written set of safety and security protocols.
- Retain an unredacted copy of those protocols, publish a redacted copy, transmit the redacted copy to the New York Attorney General and the New York Division of Homeland Security and Emergency Services, and grant access to those protocols upon request.
- Record and maintain information on tests and test results used in assessing the frontier model’s safety and security.
- Implement safeguards to prevent unreasonable risk of critical harm.

The RAISE Act also requires large developers to:

- Conduct an annual review of their safety and security protocols and make any necessary modifications.
- Disclose any safety incidents to the New York Attorney General and the New York Division of Homeland Security and Emergency Services within 72 hours of learning of the incident.
- Refrain from deployments that would create an unreasonable risk of “critical harm.”

The RAISE Act does not create a private right of action; it is only enforceable by the New York Attorney General. Violations of the RAISE Act can result in civil penalties of up to US$10 million for a first violation and up to US$30 million for subsequent violations.
The RAISE Act focuses on “large developers” and the risk of “critical harm”

The RAISE Act imposes obligations on “large developers,” defined under the RAISE Act as “a person who has trained at least one frontier model and spent over US$100 million in aggregate on compute costs . . .” for training frontier models. “Person” is in turn defined as “an individual, proprietorship, firm, partnership, joint venture, syndicate, business trust, company, corporation, limited liability company, association, committee or any other nongovernmental organization or group of persons acting in concert.” Only accredited colleges and universities engaged in academic research are exempt.
NY Gen. Bus. L. §§ 1420(9) and (11).
The RAISE Act also assigns these “large developer” obligations to anyone who receives the frontier model from the original developer if full intellectual property rights are transferred and none are retained by the original developer. This means a company cannot avoid these obligations or penalties simply by buying a completed frontier model or by structuring an acquisition around the RAISE Act. See id.
The RAISE Act doesn’t just cover “frontier models”; it also covers any model produced by applying “knowledge distillation” to a frontier model where the distillation compute cost exceeds US$5 million. NY Gen. Bus. L. §§ 1420(6)-(8). “Knowledge distillation” is defined as “any supervised learning technique that uses a larger artificial intelligence model or the output of a larger artificial intelligence model to train a smaller artificial intelligence model with similar or equivalent capabilities as the larger artificial intelligence model.” NY Gen. Bus. L. § 1420(8). This provision is important to understand: although a large developer might appear able to shed its compliance obligations by selling a frontier model to a third party, the developer remains within the Act’s scope if it uses frontier models to train smaller models that it retains.
The RAISE Act’s obligations and penalties aim to prevent “critical harm,” which is defined as the death or serious injury of one hundred or more people, or at least one billion dollars in property damage, caused or materially enabled by a large developer’s use, storage or release of a frontier model, including enabling either (i) the creation or use of a chemical, biological, radiological or nuclear weapon or (ii) a model engaging in criminal conduct without meaningful human interaction where the crime requires intent, recklessness or gross negligence, including solicitation or aiding and abetting of such a crime.
NY Gen. Bus. L. § 1420(7).
An important provision included within the critical harm provisions limits a developer's liability for harms “inflicted” by an “intervening human actor,” unless: 1) the developer's activities were a “substantial factor in bringing about the harm;” 2) the intervening actor’s conduct was reasonably foreseeable as a “probable consequence of the developer's activities;” and 3) the harm could have been reasonably prevented or mitigated through alternative design, security measures or safety protocols.
NY Gen. Bus. L. § 1420(7)(b)(ii).
The RAISE Act’s obligations for large developers: Safety and security protocols, appropriate safeguards, annual reviews and a seventy-two-hour reporting period

The RAISE Act mandates transparency through required safety and security protocols.
The RAISE Act defines a "safety and security protocol" as documented technical and organizational protocols that: (a) describe reasonable protections and procedures to reduce the risk of critical harm; (b) describe reasonable administrative, technical and physical cybersecurity protections to reduce the risk of unauthorized access or misuse leading to critical harm; (c) detail testing procedures to evaluate whether the frontier model poses an unreasonable risk of critical harm or could be misused, modified or combined with other software in ways that increase such risk; (d) enable compliance with the RAISE Act's requirements; and (e) designate senior personnel responsible for ensuring compliance.
Therefore, before deploying a frontier model, the RAISE Act requires that a large developer must:

- Implement a written safety and security protocol to prevent unreasonable risk of critical harm.
- Retain an unredacted copy, with version history, for the duration of the model’s deployment plus five years.
- Publish a suitably redacted version and transmit it to the New York Attorney General and the New York Division of Homeland Security and Emergency Services.
- Ensure the Attorney General can access the protocol, with only legally required redactions, upon request.
- Record and retain, for the duration of the model’s deployment plus five years, information on tests and test results with sufficient detail for third parties to replicate the testing procedure.
- Implement appropriate safeguards to prevent unreasonable risk of critical harm.

NY Gen. Bus. L. § 1421(1). The RAISE Act also prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of critical harm.
NY Gen. Bus. L. § 1421(2).
A large developer must also conduct an annual review of its safety protocols to account for capability changes and industry best practices, and must republish the protocol if it is materially modified. NY Gen. Bus. L. § 1421(3). The developer must also disclose each “safety incident” involving its frontier model to the Attorney General and the Division of Homeland Security and Emergency Services within seventy-two hours of learning of the incident, or of learning facts sufficient to reasonably believe one has occurred. NY Gen. Bus. L. § 1421(4).

The RAISE Act defines a “safety incident” as a known instance of critical harm, or an incident that provides demonstrable evidence of an increased risk of critical harm, such as the model engaging in autonomous behavior without a user request; theft, misuse or escape of model weights; critical failures of technical or administrative controls; or unauthorized model use. NY Gen. Bus. L. § 1420(13).

Each disclosure of a safety incident must include the date, the basis for its qualification as a safety incident and a short description. NY Gen. Bus. L. § 1421(4). This means that large developers must closely monitor their covered AI models for these types of incidents. It also means that large developers should review and update their cybersecurity reporting procedures, because the broad definition of safety incident could also describe a cybersecurity incident affecting the AI model.
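To make the reporting clock concrete, here is a minimal Python sketch of computing the filing deadline from the moment a developer learns of an incident. The function and variable names are illustrative, not from the statute, and actual compliance timing should be confirmed with counsel.

```python
from datetime import datetime, timedelta, timezone

# The RAISE Act's disclosure window: seventy-two hours from when the
# developer learns of the safety incident (or of facts sufficient to
# reasonably believe one has occurred). NY Gen. Bus. L. § 1421(4).
REPORTING_WINDOW = timedelta(hours=72)

def disclosure_deadline(learned_at: datetime) -> datetime:
    """Latest time a safety-incident disclosure may be transmitted."""
    return learned_at + REPORTING_WINDOW

# Example: an incident discovered at 9:30 UTC on April 1 must be
# disclosed by 9:30 UTC on April 4.
learned = datetime(2026, 4, 1, 9, 30, tzinfo=timezone.utc)
print(disclosure_deadline(learned))  # 2026-04-04 09:30:00+00:00
```

Because the clock can also start from constructive knowledge ("facts sufficient to reasonably believe" an incident occurred), escalation procedures should timestamp the earliest triggering event, not just formal confirmation.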
The RAISE Act also specifically prohibits false or materially misleading statements or omissions regarding documents produced under these requirements, whether those be the safety and security protocols or the safety incident disclosures. NY Gen. Bus. L.
§ 1421(5).
The RAISE Act does, however, permit "appropriate redactions" to published safety and security protocols when necessary to: (a) protect public safety to the extent the developer can reasonably predict such risks; (b) protect trade secrets; (c) prevent release of confidential information required by state or federal law; (d) protect employee or customer privacy; or (e) prevent release of information otherwise controlled by state or federal law.
The RAISE Act defines "trade secret" consistent with standard formulations, requiring both independent economic value from secrecy and reasonable efforts to maintain that secrecy.

Enforcement of the RAISE Act: Penalties and remedies

There is no private right of action under the RAISE Act. NY Gen. Bus. L. § 1422(2). The Attorney General may bring civil actions for violations of the above requirements and can seek penalties of up to US$10 million for a first violation and US$30 million for subsequent violations, as well as injunctive or declaratory relief.
NY Gen. Bus. L. § 1422(1).
The RAISE Act directs that penalty amounts be determined based on the severity of the violation. NY Gen. Bus. L. § 1422.

Conclusion: Practical takeaways for companies

New York’s RAISE Act imposes significant obligations on developers of advanced AI systems, obligations backed by serious enforcement mechanisms and potential penalties, all of which take effect on a short timeline. While the courts will determine what effect, if any, the federal government’s recent executive order has on the RAISE Act, companies that may qualify as "large developers" with any New York touchpoints in their development, deployment or operations should address the RAISE Act's requirements immediately.
Accordingly, we recommend your company consider taking at least the following steps:

- Determine whether your company is a “large developer” with a covered “frontier model” by assessing training compute costs, including any downstream distilled models, and identifying whether any New York connection exists in development, deployment or operation.
- If your company is covered, write and implement a safety and security protocol that meets the RAISE Act’s content requirements, including detailed testing methodologies designed to evaluate unreasonable risk and foreseeable misuse, and implement appropriate technical, administrative and physical safeguards.
- Prepare for the publication and transmission mechanics, with defensible “appropriate redactions” limited to safety, trade secrets, legally required confidentiality and privacy, and maintain unredacted versions with version history.
- Establish incident response procedures, and consider updating your cyber incident response playbook to identify potential safety incidents promptly and meet the RAISE Act’s 72-hour reporting deadline, including protocols for escalating information to personnel authorized to make required disclosures.
- Conduct annual reviews against changing capabilities and industry best practices, republish when material changes are made, and ensure all submissions and disclosures are accurate and non-misleading.

Our team will continue to monitor developments related to the RAISE Act closely. For advice about the RAISE Act’s practical implications or representation related to the RAISE Act, please contact our team.
[1] See The federal government weighs in on artificial intelligence governance: What you need to know.
[2] See The Texas Responsible AI Governance Act: What your company needs to know before January 1.
[3] See NY Gen. Bus. L. § 3.