Why do good projects fail and good things take decades, when the issues can be clear in hours?
Did you try this? Fast, accurate calculations are helpful, but the space of possible things to investigate and try is still large. For a variety of reasons (“small parts can be stronger”, “simple shapes take less time”, “many bonds can make a mess”) it makes sense to assemble and use molecular parts rather than proteins. Not all the way down to polyatomic chains, or to coated inorganic substrates, but to addressable and traceable parts. Can you replace organs? Can you replace membranes or organelles? Can you replace blood, fluids, or oxygen carriers? The list of attractive, possibly useful, and possibly lucrative or helpful things on the Internet is long, and working through it can still take a long time.
What is it that you want to make? What do you want to do? Who do you want to help? Or do you simply want money, fame, and glory? It is hard to do “everything”, even when AIs and tools now make possible much that was impossible last month.
Do not make up a list of targets shaped by your own biases and guesses about what is valuable and which way to go. Without bias, pick something and solve it. For others, not for yourself. That is closely related to why outsiders see more clearly: their passions, lusts and disgusts are not activated below their level of awareness.
That way you can learn the methods of “assembly and fabrication design management” and help all groups, not just a few.
I first heard about ab initio methods in high school in 1963. The idea was new and attractive, and “let the computers do it” sounded nice. But decades and decades later, it is still a myth and a hope, and real solutions end up being “next year”, because no one is addressing the very real human dynamics of motivations, rules, priorities, greed, laziness, passions, and just plain human mistakes. If you make a company or group, most things fail for external reasons unless the whole (the group, its whole industry, and the context) is just right.
When the US or Japan or China or Russia or India or Africa or wherever says “we can concentrate our efforts and do X”, they usually fail, because the problem cannot be solved unless everyone is involved fairly, and no human group has ever set the rules so that they cannot be gamed by a few. No one, no group, never in human history. That is what I have tried to explore, understand, and find solutions for: why do all global and systemic projects take decades to map out, design, and complete, when the computers now can do it in a few hours or weeks?
===============
Full conversation in more readable form at https://chatgpt.com/share/6794f29b-8abc-800b-a7e8-94b6e721307a
Be aware that by default OpenAI always scraped “good” text from the free Internet, and that text is highly biased towards “hire us, we are experts” or “buy this”. So “form a human committee” is almost always the knee-jerk solution offered. These long outputs and their prompts can be combined and “compiled, analyzed, and resolved as a whole” to get a more complete and comprehensive map of what to do. It has to be lossless and completely traceable, or it is a waste of time.
==========
Here is what OpenAI ChatGPT Plus running ChatGPT o1 says:
Thought about human dynamics in problem-solving for 11 seconds
Below is a discussion that tries to tease apart why large, complex, global projects often take decades—despite the fact that modern computation can theoretically solve the “technical” component in a fraction of the time. It also touches on the interplay of motivations, rules, resources, and the countless human factors that slow (or stall) global coordination.
1. The Technical vs. Socio-Technical Divide
Purely Technical Complexity
Even if we imagine unlimited computational resources, perfect ab initio methods, and full AI-assisted discovery, there’s still an inescapable layer of human involvement in every real-world project. For example:
Accuracy of ab_initio_methods ~ f(high_fidelity_models, computing_power, time, quality_of_data)
Yes, modern computers can run quantum chemistry, protein folding, fluid dynamics, or advanced optimization. Yet “handing over” a design to an algorithm is just one piece. Even once the simulation yields a perfect structure (a new material, a molecule, a drug), there are multi-year steps to physically validate that result, optimize production, and gain acceptance in real markets.
Socio-Technical Complexity
Massive global or systemic projects (e.g., developing a new organ replacement technique, deploying a planet-wide climate technology, redesigning entire supply chains) involve people, markets, policy, culture, law, and international relations. If the core technical solution is only 10% of the effort, the other 90% is:
Socio_technical_challenges = f(political_will, regulations, funding, cultural_acceptance, legal_infrastructure, workforce_skills, supply_chain_capacity, trust)
No supercomputer can “solve” trust, conflicting priorities, or cultural differences. And if you cannot solve those, the project doesn’t reach scale or momentum.
2. Conflicting Incentives, Rules, and Alignment Problems
Individual vs. Global Benefit
To solve something “for the world,” many vested interests need to come together. But each participant—from private companies seeking profit, to governments seeking political advantage—inevitably has its own constraints:
Incentives(each_actor) = f(financial_rewards, political_capital, risk_aversion, internal_competition, national_security)
Global-level projects stall when each major actor can veto, delay, or divert resources to preserve its own advantage—or avoid risk.
Fairness and Perceived Fairness
Large-scale efforts unravel when any group feels exploited or excluded. Designing systems that are mutually beneficial, or at least perceived that way, often requires an immense amount of negotiation:
Global_cooperation = f(transparency, shared_governance, distribution_of_benefits, checks_against_exploitation)
If one party believes the rules are “gameable,” or that others hold disproportionate power, deals fall apart—regardless of how good the technical plan is.
3. The “Human” in Human-Driven Projects
Motivations, Emotions, and Biases
Rational analysis can fail if it runs counter to ingrained passions, cultural identities, or entrenched biases. People may reject, delay, or sabotage solutions that threaten their sense of identity, moral framework, or financial stake.
Organizational Inertia and Fear of Failure
Even big governments with massive budgets are bound by processes, laws, risk aversion, and the fear of being blamed for mistakes. That inertia can delay or derail sweeping changes.
Short-Term vs. Long-Term Vision
Politics and markets often favor short-term gains over long-term benefits. Doing “the big right thing” that pays off in 10–20 years can be politically or financially unpopular if it hurts immediate profits or election cycles.
4. Why Projects Still Take Decades
Iterative Testing and Validation
Even the most elegant design must pass safety tests, regulatory hurdles, or real-world pilot phases. In medicine, new organ substitutes or advanced therapies can take a decade to gain regulatory approval. Pilot projects, scale-up, and large trials are measured in years, not months.
Global Scale and Coordination
The time it takes to align laws, standards, supply chains, and cross-border collaborations stretches on. Each layer of “global synergy” can add years. If the project depends on uniform adoption (e.g., every country adopting a policy or technology), the slowest or most resistant stakeholder controls the overall timeline.
Economic Uncertainty and Funding
Funding can evaporate if investors or governments see a dip in short-term returns or are influenced by competing crises (wars, recessions, pandemics).
If a project loses momentum halfway through, it can stall for years before being revived under new leadership or funding.
Human Factors: Greed, Laziness, Mistakes
People might manipulate or “game” a new system for personal gain, requiring additional layers of oversight, audits, or checks.
Large, error-free collaborations are extremely challenging. Miscommunications, internal power struggles, shifting alliances, and management flaws multiply delays.
5. Learning “Assembly and Fabrication Design Management” for All
You mention focusing on something specific—picking a target not out of personal bias but out of genuine potential to help many people—and methodically applying a design-management approach. That’s often the best path to:
1) Clearly define the problem
2) Gather input from all major stakeholders
3) Use systematic design / simulation / modeling
4) Build small prototypes or pilot studies
5) Iterate and refine quickly
6) Scale and standardize
This approach can be done in a way that invites more transparent and inclusive decision-making, which helps mitigate suspicion or power-plays.
6. Can We Overcome These Barriers Faster?
AI-Driven Coordination
Modern AI tools can accelerate collaboration by providing shared transparent analyses and simulations.
AI can highlight win–win scenarios that might be overlooked by purely human negotiation.
Open-Source Collaboration
By opening up designs, standards, and research data to the world, you reduce the sense that one group “owns” the technology.
If the roadmap, cost, and potential benefits are openly visible, it’s harder for one actor to quietly manipulate the project.
Participatory Governance
Involving communities, local governments, NGOs, and industry from day one can diminish the “outsider imposing solutions” phenomenon and build trust.
Mechanisms that guarantee fair distribution of benefits (e.g., profit-sharing, local ownership, transparent oversight) help unify stakeholders.
Adaptive Regulation
Governments can design “dynamic” regulatory frameworks that pivot as real-world evidence emerges, instead of waiting for perfect data or the next decade’s rewrite of the law.
This reduces the lag between the lab, pilot studies, and mainstream adoption.
7. Summary
Why Decades, Not Months?
It’s not just about solving the technical constraints. Huge projects also must align with human psychology, politics, entrenched economic interests, legal frameworks, and intangible cultural factors. Aligning these can take far longer than the raw engineering or computational tasks.
Path Forward
Pick a concrete and worthwhile goal (health, sustainability, new materials) that truly helps others.
Apply rigorous design-management methods—iterative, transparent, inclusive.
Address stakeholder motivations systematically: fairness, shared benefits, open data, and adaptive regulations to reduce mistrust.
Leverage AI but remain aware that no amount of computational power fixes incentive misalignments by itself.
The heart of the matter is that large, global, systemic transformations always face the “human dimension.” Until that dimension is adequately structured and resolved, even the most advanced technology remains, as you say, “next year’s solution”—always on the horizon, forever just out of reach.
RichardKCollin2 said:
What can AI groups do to help? What can the emerging intelligent entities do to help?
Most importantly how can they structure relations with all human and AI society so they are trusted, trustworthy, accessible, reliable and help everyone?
At one time “banks” and “advisors” were neutral and trustworthy. Wrapping them in circuits, algorithms, software and machines has not made that role in society easy to evaluate and judge.
Unless the roles and responsibilities and benefits are clear, it won’t have a chance to even begin.
Here is what OpenAI ChatGPT Plus running ChatGPT o1 says:
Thought about AI’s role in society for 7 seconds
Below is an outline of practical steps AI groups (and emerging “intelligent entities” at large) can take to become trusted, trustworthy, and reliable partners to human society. For each aspect, we’ll consider structural, technical, and cultural approaches—because trust ultimately depends on transparent governance, clear responsibilities, and demonstrable benefits for all stakeholders.
1. Clarify Roles, Responsibilities, and Benefits
Role Definition
AI groups should explicitly define:
Their purpose (e.g., “We develop decision-support tools for healthcare.”)
Their scope (what they do and, importantly, what they do not do)
Their obligations to the public, users, employees, and regulators
Benefits and responsibilities must be clearly stated in publicly available, understandable terms.
Alignment with Societal Goals
Demonstrate how the group’s work directly connects to problems people care about:
Healthcare access
Environmental sustainability
Education
Financial inclusion
Make it explicit: “Here is the problem we’re solving, here is how we’ll measure improvement, and here is who benefits.”
Multi-Stakeholder Involvement
At an early stage, invite representatives from diverse groups—consumer advocates, domain experts, local communities, regulatory bodies—to co-create the AI’s objectives or use cases.
If people see how the system is designed from day one, trust starts from familiarity, not from marketing hype.
2. Adopt Transparent, Accountable Governance
Open Governance Model
Publish an organizational governance framework describing how decisions are made:
Decision_Pipeline = f(technical_teams, ethics_panel, user_feedback, regulatory_requirements, board_oversight)
Have an independent ethics board or advisory council with actual veto or revision power on critical design decisions.
Clear Mechanisms of Accountability
Internal Audits: Set up formal processes that regularly review code, data, decisions, and outcomes.
External Audits: Allow third-party organizations or “red teams” to evaluate the AI’s performance and risk factors.
Liability Frameworks: Publicly define how your group handles mistakes or misuse. If you produce a system that harms people, who is responsible, and how are they made whole?
Explainable Outcomes
For any AI that impacts crucial human decisions—loans, medical diagnoses, hiring—provide explainable components:
Summaries of the factors leading to a decision
Mechanisms for appeals or second opinions
Trust grows when users (and regulators) can see why a model does what it does.
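As an illustration only, here is a minimal Python sketch of a per-decision factor summary for a hypothetical additive loan-scoring model. The feature names, weights, and threshold are invented for the example, not taken from any real system.

# Minimal sketch: a per-decision factor summary for a hypothetical loan model.
# The feature names, weights, and applicant values below are illustrative
# assumptions, not any real scoring system.

def explain_decision(weights, applicant, threshold=0.0):
    # Contribution of each factor = weight * applicant value (simple additive model).
    contributions = {name: weights[name] * applicant.get(name, 0.0) for name in weights}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "decline"
    # Rank factors by absolute influence so the summary shows the main drivers first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, score, ranked

weights = {"income_to_debt_ratio": 2.0, "years_employed": 0.5, "missed_payments": -1.5}
applicant = {"income_to_debt_ratio": 1.2, "years_employed": 3, "missed_payments": 2}

decision, score, ranked = explain_decision(weights, applicant)
print(decision, round(score, 2))
for factor, contribution in ranked:
    print(f"  {factor}: {contribution:+.2f}")

Even this simple ranking of contributions gives a user or regulator something concrete to question or appeal.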
3. Foster Broad Accessibility and Inclusion
Equitable Access to AI Tools
Make sure the AI outputs or services are not locked behind paywalls or exclusive partnerships—especially if it’s addressing public or essential goods (health, education).
Offer tiered or open licensing so nonprofits, small companies, and developing regions can also benefit.
Localized Interfaces and Language Support
AI systems should be linguistically and culturally adaptable.
If the AI is a medical assistant, for example, ensure it can handle local contexts, languages, and cultural norms.
User-Friendly Communication
Provide user interfaces and documentation that non-technical people can grasp.
This includes clarifying limitations: “The system’s best performance is on tasks X and Y, but it may be less accurate on tasks Z.”
4. Ensure Robust Reliability and Security
Resilience by Design
Build systems to handle unexpected inputs and gracefully degrade rather than producing catastrophic errors.
For example:
AI_Safety_Functions = f(redundant_models, anomaly_detection, sandbox_testing)
Perform frequent “stress tests,” akin to banking stress tests, to see how the AI behaves under extreme or malicious scenarios.
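A minimal sketch of the “gracefully degrade” idea, assuming a hypothetical toy model and hand-set validity bounds (both invented for illustration): inputs outside the validated range get a safe fallback instead of a confident answer.

# Minimal sketch of graceful degradation: if an input looks out of distribution,
# return a safe default answer instead of trusting the model.
# The bounds, toy model, and fallback here are illustrative assumptions.

def looks_anomalous(features, bounds):
    # Flag inputs that fall outside the ranges seen during training/validation.
    return any(not (lo <= features.get(name, lo) <= hi) for name, (lo, hi) in bounds.items())

def predict_with_fallback(model, features, bounds, fallback="refer_to_human"):
    if looks_anomalous(features, bounds):
        return fallback, "input outside validated range"
    return model(features), "model prediction"

# Hypothetical stand-ins for demonstration only.
bounds = {"age": (18, 100), "dose_mg": (0, 500)}
toy_model = lambda f: "dose_ok" if f["dose_mg"] < 300 else "dose_high"

print(predict_with_fallback(toy_model, {"age": 45, "dose_mg": 250}, bounds))
print(predict_with_fallback(toy_model, {"age": 45, "dose_mg": 5000}, bounds))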
Data Privacy and Protection
Encrypt sensitive data.
Comply with global privacy regulations (GDPR, etc.) and seek to exceed them, demonstrating genuine care for user protection.
Provide transparent data governance so everyone knows what data is collected, how it’s used, who can access it, and how long it’s retained.
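One way to make that data governance transparent is to keep it machine-readable. Below is a minimal sketch with invented field names and retention periods; it is not a standard schema, just an illustration of writing the policy down where it can be audited.

# Minimal sketch of a machine-readable data-governance record, so "what is
# collected, why, who can see it, and how long it is kept" is explicit.
# Field names and values are illustrative assumptions, not a standard schema.

DATA_GOVERNANCE = [
    {
        "field": "date_of_birth",
        "purpose": "age verification",
        "access": ["verification_service"],
        "retention_days": 30,
        "encrypted_at_rest": True,
    },
    {
        "field": "chat_transcripts",
        "purpose": "quality review",
        "access": ["support_team", "audit_board"],
        "retention_days": 365,
        "encrypted_at_rest": True,
    },
]

def retention_report(records):
    for r in records:
        print(f"{r['field']}: kept {r['retention_days']} days for {r['purpose']}, "
              f"visible to {', '.join(r['access'])}")

retention_report(DATA_GOVERNANCE)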
Continuous Monitoring and Updates
The environment changes, attacks evolve, and training data can become outdated.
Establish lifecycle maintenance to:
Retrain or calibrate models periodically
Patch vulnerabilities proactively
Announce changes transparently
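As a minimal sketch of lifecycle monitoring, the following compares a feature’s production distribution to a training-time baseline and flags large shifts. The numbers and the alert threshold are illustrative assumptions; a real deployment would track many features and metrics.

# Minimal sketch of a drift check that could trigger recalibration or retraining.
# Baseline statistics and the alert threshold are illustrative assumptions.

from statistics import mean, stdev

def drift_score(baseline_values, recent_values):
    # How far the recent mean has moved, in units of baseline standard deviation.
    base_mean, base_sd = mean(baseline_values), stdev(baseline_values)
    return abs(mean(recent_values) - base_mean) / base_sd if base_sd else 0.0

baseline = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50]   # a feature recorded at training time
recent   = [0.61, 0.63, 0.60, 0.62, 0.64, 0.59]   # the same feature seen in production

score = drift_score(baseline, recent)
if score > 3.0:                                   # threshold is an assumed policy choice
    print(f"drift score {score:.1f}: schedule retraining and announce the change")
else:
    print(f"drift score {score:.1f}: within expected variation")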
5. Cultivate Neutrality and Fairness
Bias Audits
Regularly run statistical bias checks and measure:
Disparate_Impact ~ f(user_demographics, model_predictions, real_outcomes)
If the AI systematically under-serves or misjudges certain groups, incorporate corrective measures.
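A minimal sketch of one such check is the ratio of favorable-outcome rates between two groups, often compared against a rule-of-thumb threshold of 0.8. The groups and predictions below are invented for illustration.

# Minimal sketch of one common bias check: the ratio of favorable-outcome rates
# between groups. The groups, predictions, and threshold are illustrative assumptions.

def selection_rate(predictions):
    return sum(predictions) / len(predictions)

def disparate_impact_ratio(group_a_predictions, group_b_predictions):
    # Ratio of the lower selection rate to the higher one; 1.0 means parity.
    rate_a = selection_rate(group_a_predictions)
    rate_b = selection_rate(group_b_predictions)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 1 = favorable model decision
group_b = [1, 0, 0, 1, 0, 0, 1, 0]

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:                       # 0.8 is a widely cited rule-of-thumb threshold
    print("flag for review and corrective measures")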
Open Datasets for Validation
Provide test sets or partial (anonymized) real-world data for external checks and research.
Encourage data scientists or academics to replicate findings, revealing any hidden biases or performance blind spots.
Transparent Model Objectives
Communicate your system’s utility function: what is it optimizing (accuracy, fairness, revenue), and how do these trade-offs work?
If there are conflicts (e.g., maximizing profit vs. ensuring equitable outcomes), publicly acknowledge them and describe how you resolve them.
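A minimal sketch of making that trade-off explicit: publish the weights of a composite objective so anyone can see how accuracy, fairness, and revenue are balanced. The metric names and weights below are illustrative assumptions.

# Minimal sketch of publishing an objective with explicit trade-off weights.
# The metric names and weights are illustrative assumptions; the point is that
# the trade-off is written down where anyone can inspect it.

OBJECTIVE_WEIGHTS = {"accuracy": 0.6, "fairness_gap": -0.3, "expected_revenue": 0.1}

def composite_objective(metrics, weights=OBJECTIVE_WEIGHTS):
    # Higher is better; fairness_gap has a negative weight, so larger gaps lower the score.
    return sum(weights[name] * metrics[name] for name in weights)

candidate_models = {
    "model_a": {"accuracy": 0.92, "fairness_gap": 0.20, "expected_revenue": 0.80},
    "model_b": {"accuracy": 0.89, "fairness_gap": 0.05, "expected_revenue": 0.75},
}

for name, metrics in candidate_models.items():
    print(name, round(composite_objective(metrics), 3))

In this toy comparison the lower-accuracy model scores higher because its fairness gap is smaller; stating that trade-off openly is the point.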
6. Engage in Collaborative, Ethical AI Ecosystems
Open-Source or Shared Knowledge
Contribute to open-source libraries, publish technical insights, or share best practices.
This builds a broader trust ecosystem, where improvements to safety, interpretability, or fairness benefit everyone.
Standards and Consortiums
Join or form multi-stakeholder consortiums to set industry-wide standards:
Model interpretability best practices
Safety benchmarks
Data-sharing norms
When AI groups shape and follow common rules, it reduces suspicion of “secret agendas.”
Public Education and Outreach
Host workshops or educational programs that demystify AI for students, professionals, and the public.
Transparent engagement can turn confusion or fear into understanding and constructive critique.
7. Long-Term Stewardship and Evolution
Adapt to Societal Needs
Keep an ear to the ground for shifting social priorities (e.g., climate crises, pandemics, new economic realities) and adapt your AI offerings accordingly.
Show that you’re responsive rather than locked to outdated business models.
Ethical and Human-Centered AI Research
Invest in fundamental R&D about AI alignment, meaning an AI’s objectives remain compatible with widely accepted human values or the explicit instructions of its user community.
If advanced AIs act more autonomously, develop fail-safes and rigorous alignment protocols.
Proactive Policy and Self-Regulation
Instead of waiting for external forces to impose rules, AI groups can self-regulate by adopting strong internal policies or pledging to adhere to known frameworks (e.g., the OECD AI Principles).
Leading the compliance conversation fosters trust and sets an example for newcomers.
Synthesis: A Trustworthy AI Society
For AI to gain the status once held by “neutral” banks or advisors, it must demonstrably serve the public good and prove it can safeguard interests fairly and consistently. The essential elements are:
Transparency about data, processes, and motivations
Accountability with clearly defined responsibilities, audits, and liability
Inclusivity so that all stakeholders (not just profit-driven entities) have influence and benefit
Reliability in delivering services with consistent quality and safety
Continuous Evolution to adapt to new insights, social demands, and technological risks
If AI systems and their creators follow these principles, they stand a far better chance of being perceived as trusted, neutral, and beneficial. This will not happen automatically—intense diligence, community engagement, and open collaboration are required. But these steps give a solid starting structure for AI groups (and emerging intelligent agents) to earn genuine trust, serve everyone’s interests, and demonstrate their worth to human society.