Daniel Kliewer

Developing a Toxicity Detection Communication App: Promoting Positive Dialogue with AI Ethics, React, and TensorFlow.js

8 min read
React · TensorFlow.js · Toxicity Model · Graph Visualization · AI Ethics · NLP · Machine Learning · Positive Communication


Learning From the Past and Building a Better Future Through Technology

Our world stands at a critical juncture. We’ve witnessed how the unchecked pursuit of strategic advantage, from mid-20th century conflicts to the present day, can normalize moral compromises. The history of warfare, internment camps, and the nuclear arms race taught us a lesson: the ends do not justify the means. Yet here we are again, seeing advanced technologies—drones, AI, cryptographic frameworks—funneled into machines of war. We see hypocrisy in how international rules are applied selectively, and we see how the very frameworks meant to maintain peace can be bent or broken for short-term gain.

But technology doesn’t have to serve destruction. Just as the same drone technology can be repurposed to improve healthcare delivery in remote areas, or augmented reality can train doctors more efficiently, the tools we create can heal rather than harm. This choice—how we apply our technology—is ours to make. We can build a future where AI supports better communication, encourages empathy, and guides us toward more conscientious behavior.

This brings us to today’s project: a small web application that uses machine learning and a graph-based data structure to help people communicate more positively. Instead of guiding deadly precision strikes, this codebase is designed to guide more constructive dialogue. By highlighting and mapping out potentially hurtful language, the app nudges us toward healthier, more uplifting forms of expression.

We’re acknowledging our past failures and choosing a different path forward. This app, while small and symbolic, is a testament to the idea that we can use the most advanced tools at our disposal to cultivate empathy rather than enmity. We can support each other by learning from the past and building a kinder digital world, one line of code at a time.


Guide: Building the “PositiveWords Graph” Application

Goal:
Set up a React application that integrates a toxicity-detection ML model and visualizes user input as a weighted graph of words. This encourages more thoughtful communication and leverages technology to improve the world in a small but meaningful way.

Key Features:

  • A React front-end that allows the user to enter a message.
  • Integration with a pre-trained TensorFlow.js toxicity model to detect harmful language.
  • A graph representation of the user’s text where nodes are words and edges represent adjacency and frequency, highlighting potentially problematic areas.
  • Deployment capability via Git and Netlify so changes can be easily pushed live.
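To make the graph idea in the third bullet concrete, here is a standalone Node sketch of a word-adjacency graph. It is a simplified, illustrative version of the buildGraphFromText helper that appears later in src/App.js; the name buildWordGraph is just for this demo.

```javascript
// Illustrative sketch of the word-adjacency graph described above.
// Nodes are unique words; an undirected edge's weight counts how often
// the two words appear next to each other in the text.
function buildWordGraph(text, toxicWords = []) {
  const words = text
    .toLowerCase()
    .replace(/[^\w\s]/gi, '') // strip punctuation
    .split(/\s+/)
    .filter(w => w.length > 0);

  const nodes = {};
  const edges = {};

  for (const w of words) {
    nodes[w] = nodes[w] || { word: w, toxicityWeight: toxicWords.includes(w) ? 1 : 0 };
  }

  for (let i = 0; i < words.length - 1; i++) {
    const [a, b] = [words[i], words[i + 1]];
    // Alphabetical key so "be kind" and "kind be" share one edge.
    const key = a < b ? `${a}-${b}` : `${b}-${a}`;
    edges[key] = edges[key] || { a, b, weight: 0 };
    edges[key].weight += 1;
  }

  return { nodes: Object.values(nodes), edges: Object.values(edges) };
}

const graph = buildWordGraph('Be kind, be kind!', ['kind']);
console.log(graph.nodes.length, graph.edges.length); // 2 unique words, 1 edge
console.log(graph.edges[0]); // { a: 'be', b: 'kind', weight: 3 }
```

Repeated adjacent pairs accumulate weight on a single undirected edge, which is what lets the visualization emphasize the most common word transitions.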

Prerequisites

  • Node.js and npm installed (verify with node -v and npm -v)
  • Git installed (verify with git --version)
  • A GitHub account for version control
  • A Netlify account for free deployment

Step-by-Step Instructions

1. Create a New React App
Use create-react-app for quick setup.

Bash
# Navigate to your projects directory
cd /path/to/projects

# Create a new React app
npx create-react-app positivewords-graph

2. Move Into the Project and Install Dependencies

Bash
cd positivewords-graph
npm install @tensorflow/tfjs @tensorflow-models/toxicity
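The toxicity package exposes a load(threshold) function, and model.classify(sentences) resolves to one prediction per label (identity_attack, insult, threat, and so on). Each prediction is shaped like { label, results: [{ probabilities, match }] }, where match is true, false, or null when neither probability clears the threshold. The sample below is hand-crafted to mimic that shape (not real model output, which would require downloading the model) and shows the filtering the app performs later:

```javascript
// Hand-crafted sample mimicking the shape of model.classify() output;
// real predictions carry Float32Array probabilities per input sentence.
const samplePredictions = [
  { label: 'identity_attack', results: [{ probabilities: [0.99, 0.01], match: false }] },
  { label: 'insult',          results: [{ probabilities: [0.05, 0.95], match: true  }] },
  { label: 'threat',          results: [{ probabilities: [0.60, 0.40], match: null  }] },
];

// Keep only labels the model confidently matched.
const matchedLabels = samplePredictions
  .filter(pred => pred.results[0].match === true)
  .map(pred => pred.label);

console.log(matchedLabels); // [ 'insult' ]
```

Filtering on match === true (rather than just truthiness) matters because null means "undecided at this threshold", not "safe".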

3. Replace the Default Code With Our Custom Code

  • Open the project in your code editor.
  • Replace src/App.js and src/App.css with the provided code below.
  • Ensure src/index.js and package.json match the provided snippets.

package.json (already created by create-react-app, just ensure dependencies are present):

JSON
{
  "name": "positivewords-graph",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "@tensorflow-models/toxicity": "^1.2.2",
    "@tensorflow/tfjs": "^4.0.0",
    "react": "^18.0.0",
    "react-dom": "^18.0.0",
    "react-scripts": "5.0.0"
  },
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build"
  }
}

src/index.js:

JavaScript
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';
import './App.css';

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<App />);

src/App.js:

JavaScript
import React, { useState, useEffect } from 'react';
import '@tensorflow/tfjs'; // side-effect import registers the TF.js backend
import { load } from '@tensorflow-models/toxicity';
import './App.css';

function buildGraphFromText(text, toxicWords) {
  const words = text
    .toLowerCase()
    .replace(/[^\w\s]/gi, '')
    .split(/\s+/)
    .filter(w => w.trim().length > 0);
  
  const nodes = {};
  const edges = {};

  words.forEach(w => {
    if (!nodes[w]) {
      nodes[w] = { word: w, toxicityWeight: toxicWords.includes(w) ? 1 : 0 };
    }
  });

  for (let i = 0; i < words.length - 1; i++) {
    const a = words[i];
    const b = words[i + 1];
    const key = a < b ? `${a}-${b}` : `${b}-${a}`;
    if (!edges[key]) {
      edges[key] = { a, b, weight: 0 };
    }
    edges[key].weight += 1;
  }

  return { nodes: Object.values(nodes), edges: Object.values(edges) };
}

function App() {
  const [model, setModel] = useState(null);
  const [inputText, setInputText] = useState('');
  const [analysis, setAnalysis] = useState(null);
  const threshold = 0.9;
  
  useEffect(() => {
    load(threshold).then(m => {
      setModel(m);
    });
  }, [threshold]);

  const analyzeText = async () => {
    if (!model || !inputText) return;
    const predictions = await model.classify([inputText]);
    setAnalysis(predictions);
  };

  const handleChange = (e) => {
    setInputText(e.target.value);
  };

  const getToxicWords = () => {
    if (!analysis) return [];
    // The toxicity model classifies the whole sentence, not individual
    // words, so if any label matched we flag every word in the input.
    const anyMatch = analysis.some(pred => pred.results[0].match === true);
    if (!anyMatch) return [];
    return inputText
      .toLowerCase()
      .replace(/[^\w\s]/gi, '')
      .split(/\s+/)
      .filter(w => w.trim().length > 0);
  };

  const toxicWords = getToxicWords();
  const graphData = buildGraphFromText(inputText, toxicWords);

  return (
    <div className="App">
      <h1>PositiveWords Graph</h1>
      <p>Encouraging healthier communication with ML and graph insights.</p>
      <h2>Analyze Your Message</h2>